Answer:
The first answer is "adjective", the second answer is "software", and the last one is "safe from attack".
Explanation:
I'm actually not sure how to answer this, because there should be some reason for it, for example if someone had committed a crime.
Answer:
Social media are among the primary sources of news in the U.S. and across the world. Yet users are exposed to content of questionable accuracy, including conspiracy theories, clickbait, hyperpartisan content, pseudoscience, and even fabricated "fake news" reports.
It's not surprising that so much disinformation gets published: spam and online fraud are lucrative for criminals, and government and political propaganda yield both partisan and financial benefits. But the fact that low-credibility content spreads so quickly and easily suggests that both people and the algorithms behind social media platforms are vulnerable to manipulation.
As AI's reach grows, the stakes will only get higher: more and more of what we see (or don't see) in our news and social media feeds is determined by algorithms. Consider a recent write-up in Wired, which illustrated how dating app algorithms reinforce bias.
Other algorithms on social media may reinforce stereotypes and preferences as they process and display "relevant" content for human users, for example by selecting information based on the previous choices of a similar user or group of users. Beyond assembling and processing data, bias can also be built in through the design of the system itself.
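To make that selection mechanism concrete, here is a minimal, hypothetical sketch in Python (not taken from any real platform; the users, categories, and click history are invented) of a recommender that fills a user's feed with whatever their most similar peer already clicked. Because recommendations come from peers who share the user's past choices, the feed keeps amplifying those same choices.

# Illustrative sketch only: a toy "show what similar users clicked" recommender.
# All names, categories, and click data below are hypothetical.

from collections import Counter

# Hypothetical click history: user -> content categories they engaged with
clicks = {
    "alice": ["politics_left", "politics_left", "sports"],
    "bob":   ["politics_left", "music"],
    "carol": ["politics_right", "cooking"],
}

def similarity(user_a: str, user_b: str) -> int:
    """Count shared categories, a crude stand-in for user similarity."""
    return len(set(clicks[user_a]) & set(clicks[user_b]))

def recommend(user: str, k: int = 2) -> list[str]:
    """Recommend the categories most clicked by the user's most similar peer."""
    peers = [u for u in clicks if u != user]
    closest = max(peers, key=lambda u: similarity(user, u))
    counts = Counter(clicks[closest])
    return [category for category, _ in counts.most_common(k)]

# Alice's feed is filled from the peer who already shares her preferences,
# so she keeps seeing more of what she (and people like her) already chose.
print(recommend("alice"))  # e.g. ['politics_left', 'music']

Real ranking systems are far more elaborate, but the feedback loop is the same: what was chosen before shapes what gets shown next.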
If someone is happy but the music reminds them of something sad, it could make them upset instead.

It is not really murder, because the doctor ends the patient's life only at the patient's own request.
If the patient asks to die, then the doctor carries it out.
Though euthanasia is not legal in the United States, doctors can assist people in ending their own lives.
HOPE IT HELPS YOU.