In Brazil, 23 federal ministries and agencies already use artificial intelligence (AI) and machine-learning systems in their operations, according to a survey of the public sector.
Although these systems are still in the testing phase, "public entities are making use of them without a regulatory framework in force, without risk assessment, and with little or nothing known about whether any analysis of potential harms has been carried out," criticized Bianca Kremer, professor of digital law at the Brazilian Institute of Education, Development and Research (IDP), during a panel held last Thursday (12) by the Commission of Jurists responsible for supporting the drafting of a substitute bill on artificial intelligence in Brazil (CJSUBIA).
For the experts consulted during public hearings on the impact of AI on public-policy formulation and on the regulation of the technology, legislation for the sector will have to take into account the risk of discriminatory biases contaminating automated systems. Researchers working on human rights and social causes, in turn, warned that technological tools are neither neutral nor autonomous.
The professor, who cited the survey mentioned at the beginning of this article, gave as an example a project carried out in an Argentine province in partnership with Microsoft. The system used by the government promised to identify the children most likely to become pregnant during adolescence, so that they could be reached by targeted public policies. According to Kremer, the parameters the system used, such as ethnicity, place of residence, and number of people in the same household, perpetuated the stigmatization of poor women, producing "oversized" results and "gross" statistical errors.
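The kind of skew Kremer describes can be surfaced by a basic disparity audit: comparing how often a model flags members of different demographic groups. The sketch below is purely illustrative, with entirely synthetic data and hypothetical group labels; nothing here comes from the Argentine system itself:

```python
# Illustrative disparity audit: compare a model's flag rates across groups.
# All predictions and group labels below are synthetic.

def flag_rate(predictions, groups, target_group):
    """Fraction of members of target_group that the model flags positive."""
    members = [p for p, g in zip(predictions, groups) if g == target_group]
    return sum(members) / len(members)

# Synthetic predictions (1 = flagged "at risk") and group labels.
predictions = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]
groups      = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rate_a = flag_rate(predictions, groups, "A")  # 0.8
rate_b = flag_rate(predictions, groups, "B")  # 0.0

# A large gap between groups is a signal to investigate proxy features
# such as place of residence or household size before deploying the model.
print(f"flag rate A={rate_a:.1f}, B={rate_b:.1f}, gap={rate_a - rate_b:.1f}")
```

A gap this size would not by itself prove discrimination, but it is exactly the kind of measurement a pre-deployment risk assessment would be expected to produce.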
One of the many challenges of artificial intelligence
In another panel of the day's hearings, speakers presented the challenges involved in improving the reliability of the parameters of artificial intelligence systems.
Renato Monteiro, who works in privacy and data protection at Twitter, said that the "right to explanation" in algorithmic systems is already enforceable in Brazil under the LGPD, although it must be weighed against third-party rights such as trade secrecy and intellectual property.
The right he refers to arises in situations where decisions about a data subject are made through automated processing of their personal data.
Although Monteiro said that investing in explainability is important when it comes to artificial intelligence, the most impactful improvements, in his view, come from people's actual experience, from changing the product and the service itself.
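One modest form of the explainability Monteiro mentions is reporting, for each automated decision, how much each input feature contributed to the score. The sketch below assumes a simple linear model; the feature names and weights are invented for illustration and do not describe any real system:

```python
# Minimal per-decision explanation for a hypothetical linear scoring model.
# Feature names and weights are invented, for illustration only.

weights = {"income": -0.4, "late_payments": 1.2, "account_age": -0.2}

def explain(features):
    """Return the score and each feature's contribution, largest first."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return score, ranked

score, ranked = explain({"income": 2.0, "late_payments": 3.0, "account_age": 5.0})
# ranked[0] shows late_payments as the dominant contribution (about +3.6),
# which is the kind of statement a data subject could actually contest.
```

For opaque nonlinear models the same idea requires post-hoc attribution methods, which is where the tension with trade secrecy that Monteiro mentions tends to appear.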
In 2020, Twitter users reported that the platform's automatic image-cropping feature failed to identify and center Black people's faces. According to Monteiro, the company suspended the feature after an assessment concluded that the identification algorithm was biased.