5 Tips for Business Owners from President Biden’s AI Bill of Rights

President Biden is taking new steps to ensure the rapid advancement of AI technology is well managed. The Biden Administration recently released a plan for an “AI Bill of Rights,” a set of five recommendations to ensure that artificial intelligence systems are safe, equitable, optional and, above all, ethical.

Unlike the actual Bill of Rights, this document is not legally binding. Rather, the plan exists to formalize best practices from the major players in the AI and machine learning space. Those practices include ensuring AI isn’t biased due to bad data, providing notice when automation is being used, and offering human-based alternatives to automated services, according to Venkat Rangapuram, CEO of data solutions provider Pactera Edge.

Here are the five “rights” outlined in the White House plan and how companies can apply them when developing and using automated systems.

1. Ensure automated systems are safe and effective.

User safety should always be of paramount importance in the development of AI systems, according to the plan. The administration argues that automated systems should be developed with input from the public, allowing for consultation with diverse groups of people capable of identifying potential risks and concerns, and that the systems should undergo rigorous pre-deployment testing and ongoing monitoring to demonstrate they are safe and effective.

An example of harmful AI cited in the document involves Amazon, which installed AI-powered cameras in its delivery vans to assess the safety habits of its drivers. The system incorrectly penalized drivers when other cars cut them off or when other events beyond their control occurred on the road. As a result, some drivers were denied bonuses.

2. Protect users from algorithmic discrimination.

The second right addresses the tendency of automated systems to “produce inequitable results” by relying on data that fails to account for existing systemic biases in American society, such as facial recognition software that misidentifies people of color more often than white people, or an automated hiring tool that rejects applications from women.

To combat this, the plan points to the Algorithmic Bias Safeguards for the Workforce, a set of best practices developed by a consortium of industry leaders including IBM, Meta, and Deloitte. The document outlines steps for educating employees about algorithmic bias, as well as instructions for implementing safeguards in the workplace.

3. Protect users from abusive data policies.

According to the third right, everyone should have authority over how their data is used. The plan suggests that designers and developers of automated systems should seek user permission and respect user decisions regarding the collection, use, access, transfer, and deletion of personal data. It adds that any request for consent should be brief and written in plain, understandable language.

Rangapuram says designing automated systems that continuously learn without appearing authoritative is a “tough balance” to strike, but adds that allowing individual users to determine their own level of comfort and privacy is a good first step.

4. Provide users with notices and explanations.

Consumers should always know when an automated system is being used and be given enough information to understand how and why it contributes to outcomes that impact them, according to the fourth right.

Overall, Rangapuram says public distrust of corporations that collect data could slow the progress of new technology, so making it clear how and why data is used has never been more vital. By educating people about their data, companies can earn the trust of their users, which could make those users more willing to share their information.

5. Offer human-based alternatives and fallback options.

According to the plan, users should be able to opt out of automated systems in favor of a human alternative. At the same time, automated systems should have human-based fallback plans in case of technological problems. As an example, the plan highlights customer service systems that use chatbots to respond to common customer complaints but redirect users to human agents for more complex issues.

Consider a self-driving car: even if the system works perfectly, “you’ll still need a steering wheel in case something happens,” says Rangapuram.
