Cyberspace: The Black Box
In the realm of artificial intelligence, the “Black Box” approach has drawn both curiosity and concern. This personal project, titled “Cyberspace: The Black Box,” seeks to delve into the enigmatic inner workings of AI systems, particularly their decision-making processes. By exploring the challenges posed by this lack of transparency and its implications for understanding AI decisions, I aim to shed light on the complex “language” spoken within these black boxes and unravel the mysteries they contain.
“Our habits and our fears make our choices. We are an algorithm of ourselves–if you liked that you may also like this.”
-> Jeanette Winterson, The Gap of Time
The “Black Box” approach to training an artificial intelligence (AI) refers to the lack of transparency in the decision-making processes of AI systems. The inputs and outputs are highly visible, but the inner workings, the part that turns a model into a tool, are not. It is extremely difficult to understand what “language” these in-betweens speak, making it hard, and sometimes impossible, to understand how the system arrived at a specific decision.
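To make this concrete, here is a minimal sketch (using scikit-learn purely as an illustration; the data and model are invented for this example, not taken from any real system): the input and output of a small neural network are trivially inspectable, but the learned parameters in between are just arrays of numbers that explain nothing on their own.

```python
# A minimal sketch of the "black box" problem, using scikit-learn.
# The inputs and outputs are fully visible; the learned parameters
# in between are not meaningfully interpretable on their own.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=0)
model.fit(X, y)

print(X[:1])                 # the input: perfectly visible
print(model.predict(X[:1]))  # the output: perfectly visible
print(model.coefs_[0])       # the "in-between": an opaque matrix of floats
```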
I take issue with this approach.
Private datasets used to train AI systems often contain sensitive information, such as personal data obtained from a crawl of websites. This, along with other factors, introduces bias into the dataset used to train the AI. If the training data contains biased information, the AI system will be biased too. This can lead to unfair and discriminatory decisions, such as denying loans or employment opportunities based on race, gender, or other sensitive factors. As a result, it is difficult to determine the accuracy and fairness of AI systems trained on these datasets.
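Here is a toy illustration of how that propagation happens, with entirely made-up data and no real system in mind: even when the sensitive attribute itself is excluded from the features, a correlated proxy lets the model reproduce the bias baked into the historical labels.

```python
# Toy illustration: bias in the training data resurfaces in the model,
# even when the sensitive attribute itself is excluded from the features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)              # sensitive attribute (0 or 1)
zipcode = group + rng.normal(0, 0.3, n)    # proxy feature correlated with group
income = rng.normal(50, 10, n)             # legitimate feature
# Historical labels are biased: group 1 was approved less often.
approved = (income + rng.normal(0, 5, n) > 50) & ~((group == 1) & (rng.random(n) < 0.4))

X = np.column_stack([zipcode, income])     # note: 'group' is NOT a feature
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

print("approval rate, group 0:", pred[group == 0].mean())
print("approval rate, group 1:", pred[group == 1].mean())  # noticeably lower
```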
Additionally, the “Black Box” approach limits the ability of groups and organizations to find problems with their AI systems. Errors, bias, or unintended behavior can harm individuals and communities, and damage the reputation of the organizations that deploy these systems. For example, if an AI system is making unfair or discriminatory decisions, it may be difficult to detect this without understanding how the system reaches those decisions. It is essential for these organizations to adopt an approach to AI that is transparent and accountable, so that they can ensure their systems are making fair and accurate decisions, as the sketch below suggests.
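One thing organizations can do even without opening the box is audit the outputs. Below is a sketch of the simplest such check, a demographic parity gap; the function name and the toy decision data are mine, invented purely for illustration.

```python
# A minimal output-side audit: even when the model's internals are opaque,
# its decisions can still be checked for disparate impact across groups.
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Absolute difference in positive-decision rates between two groups."""
    rate_0 = predictions[groups == 0].mean()
    rate_1 = predictions[groups == 1].mean()
    return abs(rate_0 - rate_1)

# Hypothetical decisions pulled from a deployed black-box system:
predictions = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 0])
groups      = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

gap = demographic_parity_gap(predictions, groups)
print(f"demographic parity gap: {gap:.2f}")  # 0.40 -> 60% vs. 20% positive rate
```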
This approach also poses a challenge to regulatory compliance, for example with the European Union’s General Data Protection Regulation (GDPR). The GDPR requires organizations to be transparent about how they collect, store, and use personal data, and to give individuals the right to access, correct, or delete their data. In the context of AI, the lack of transparency in decision-making makes it difficult for organizations to comply with these regulations and to demonstrate accountability for the decisions made by their AI systems.
Ethical and responsible data collection practices help minimize bias in training datasets and ensure that AI systems are not causing harm.
It all comes down to:
- Transparency, or how easy it is to explain the algorithms behind an AI
- Elimination of Preconceived Notions, or the removal of personal bias from datasets
- Total and Absolute Freedom of Information: I want documentation and I want it now!
Through these fundamentals, we can begin to trust our artificial companions a little bit more.