Seeking to allay concerns about harmful content on its apps, Instagram parent Meta said on Thursday it would test features that blur messages containing nudity, in a bid to protect teens and deter potential scammers from contacting them.
The social media giant has come under growing scrutiny in the United States and Europe over allegations that its apps are addictive and have fueled mental health problems among young people.
Meta said the protection feature for Instagram's direct messages will use on-device machine learning to determine whether an image sent through the service contains nudity.
The feature will be turned on by default for users under 18, and Meta will notify adults to encourage them to enable it.
Because the images are analyzed on the device itself, nudity protection will also work in end-to-end encrypted chats, where "Meta won't have access to these images unless someone chooses to report them to us," the company said.
Unlike Meta's Messenger and WhatsApp apps, direct messages on Instagram are not encrypted, though the company has said it plans to roll out encryption for the service.
Meta also said it was testing new pop-up messages for users who may have interacted with an account potentially involved in sextortion scams, and that it was developing technology to help identify such accounts.
In January, the company said it would hide more content from teens on Facebook and Instagram, making it harder for them to come across sensitive material such as posts about suicide, self-harm and eating disorders.
Attorneys general of 33 U.S. states, including California and New York, sued the company in October, saying it repeatedly misled the public about the dangers of its platforms.
In Europe, the European Commission has sought information on how Meta protects children from illegal and harmful content.