Today, virtually every aspect of our lives touches some part of an online network. While this connectivity has certainly improved daily life (we carry handheld devices that can deliver information at any moment), it also poses certain risks.
These risks go beyond traditional hacking and breaches of our bank accounts. Many parts of our daily lives are now shaped by algorithms used in artificial intelligence (AI) systems, and we tend to assume those algorithms work in our best interests. But what happens when the wrong kind of bias enters them? How might that affect the outcomes they produce?
What happens when biased algorithms infiltrate AI systems?
To offer an example, on YouTube, an AI algorithm recommends nearly 70% of all videos, and on social media platforms like Instagram and TikTok the percentage is even higher. Although these algorithms can help users find content they are interested in, they raise serious privacy issues. There is also mounting evidence that some of the recommended content people consume online is dangerous, whether because it spreads misinformation or because it presents a perspective designed to subliminally sway a person’s political thinking or beliefs.
Creating well-rounded, adaptable AI is a challenging technical and social endeavor, but one of the utmost significance. It is easy to focus on the technology’s positive effects while overlooking how AI could harm societal norms and online usage patterns. Online sources significantly influence our society, and biases in online algorithms can unintentionally foster injustice, shape people’s beliefs, spread false information, and stoke conflict among groups.
This is where “bad AI,” driven by unwanted or unfair biases, can have truly significant consequences.
Biased AI can adversely affect traffic intersections
Take traffic intersections as a real-world example. Long wait times at traffic lights are becoming a thing of the past, thanks to new AI technologies being deployed in markets around the country. These transit priority solutions leverage real-time traffic data so that lights can compensate for changing traffic patterns, keeping traffic flowing and reducing congestion.
These systems use deep learning in a trial-and-error fashion: the program recognizes when it is not doing well and tries a different course of action, or continues down the same path when it makes progress.
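To make that feedback loop concrete, here is a minimal sketch, in Python, of the kind of trial-and-error learning (Q-learning, a simple form of reinforcement learning) such a system might use. Everything here is an illustrative assumption, not any vendor’s actual implementation: the state is a coarse pair of queue lengths, the actions are which approach gets the green, and the reward penalizes total queued vehicles.

```python
import random
from collections import defaultdict

# Hypothetical example: trial-and-error control of one intersection.
# State = (north-south queue, east-west queue); action = which gets green.
ACTIONS = ["ns_green", "ew_green"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

q_table = defaultdict(float)  # maps (state, action) -> estimated value

def choose_action(state):
    """Mostly exploit the best-known action, occasionally explore."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])

def update(state, action, reward, next_state):
    """Reinforce actions that reduced waiting; back off from ones that didn't."""
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    q_table[(state, action)] += ALPHA * (
        reward + GAMMA * best_next - q_table[(state, action)]
    )

def simulate_step(state, action):
    """Toy stand-in for real sensor data: a green light drains that queue,
    random arrivals join both; reward is the negative total queue length."""
    ns, ew = state
    if action == "ns_green":
        ns = max(0, ns - 3)
    else:
        ew = max(0, ew - 3)
    ns += random.randint(0, 2)
    ew += random.randint(0, 2)
    return (ns, ew), -(ns + ew)

state = (5, 5)
for _ in range(10_000):
    action = choose_action(state)
    next_state, reward = simulate_step(state, action)
    update(state, action, reward, next_state)
    state = next_state
```

Note that the learner optimizes whatever the reward says to optimize. If the reward (or the training data behind it) quietly favors some vehicles over others, the system will faithfully learn that bias, which is exactly the risk described next.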
Sounds like a great idea, right? But what happens if, over time, the AI algorithms embedded in the traffic sensor technology begin to prioritize more expensive vehicles over others, because a biased algorithm has learned to treat drivers of certain types of vehicles as more deserving of priority? This is where “bad AI” could adversely affect a very important part of our lives.
These AI-powered transit priority systems are part of a larger Intelligent Transportation System (ITS) that leverages the power of connected vehicle technologies. An ITS is only as good as the agnostic, cloud-based data-sharing platform it operates on, and not all platforms are created equal.
Eliminating bias in AI algorithms
These data-sharing platforms have proven highly effective, but only when the cities and municipalities overseeing transportation systems keep them open for proper data sharing and keep biased algorithms out. Unfortunately, many municipalities remain locked into contracts with hardware and device providers that claim to operate under “open architecture” yet are unwilling to work on an open data platform. Those cities severely restrict themselves from the true possibilities a cloud-based platform can provide.
Cloud-based transit prioritization systems take the global picture of a transit network into account and use unbiased, data-centric machine learning to predict the optimal moment to grant transit vehicles a green light. They minimize interference with crisscrossing routes while maximizing the probability of a continuous drive. More importantly, an agnostic cloud-based platform ensures cities run a continuously updated system for maximum transit potential, without bias from unwanted sources.
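As a simple illustration of what “without bias from unwanted sources” can mean in practice, the sketch below (hypothetical, and not LYT’s or any vendor’s actual system) scores priority requests using only operationally relevant features, such as schedule delay and passenger load. Attributes like vehicle make or price are deliberately absent from the inputs, so the scoring cannot favor them; the field names and weights are assumptions for illustration.

```python
from dataclasses import dataclass

# Hypothetical sketch: score transit priority requests using only
# operationally relevant features. Vehicle make, model, and value are
# deliberately excluded, so the score cannot encode that kind of bias.

@dataclass
class PriorityRequest:
    seconds_behind_schedule: float  # positive = running late
    passengers_on_board: int
    seconds_to_intersection: float

def priority_score(req: PriorityRequest) -> float:
    """Higher score = stronger case for an early green.
    Weights are illustrative assumptions, not production values."""
    lateness = max(0.0, req.seconds_behind_schedule)
    # Favor late, heavily loaded vehicles that are close enough
    # to the intersection for a signal change to matter.
    proximity = 1.0 / (1.0 + req.seconds_to_intersection / 30.0)
    return (0.5 * lateness + 2.0 * req.passengers_on_board) * proximity

requests = [
    PriorityRequest(120, 40, 25),  # late, full bus, nearby
    PriorityRequest(-30, 5, 20),   # ahead of schedule, nearly empty
]
best = max(requests, key=priority_score)
print(f"Grant green to the vehicle {best.seconds_to_intersection:.0f}s away")
```

The design choice worth noting is that fairness here comes from the feature set itself: what the model is never shown, it cannot learn to prioritize.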
With this technology now readily available, cities, developers, and municipalities have what they need to accelerate the buildout of intelligent transit networks that benefit everyone in a region, fairly and equitably.
Regions like the city of San José are now leveraging AI to improve the delivery of services to their residents. As the city increasingly uses AI tools, it is more important than ever to ensure those systems are effective and trustworthy. By reviewing the algorithms used in these tools, the city’s Digital Privacy Office (DPO) ensures that AI-powered technology acquisitions perform accurately, with minimal bias, and reliably. When a city department wishes to procure an AI tool, the DPO follows a specific review process to assess the benefits and risks of the system.
In this particular region, LYT and other companies, such as Google, are among the few AI vendors approved for city-wide technology deployments because their algorithms were vetted as unbiased. As more AI technologies are developed, it is especially important that they be built without biased algorithms, to ensure fair and equitable local municipal services.