AI and Ethics: Striking the Right Balance for the Future

Artificial intelligence (AI) is reshaping our world at an unprecedented pace. From transforming industries to making our trivial daily tasks interesting, AI’s potential seems limitless. However, with great power comes great responsibility, and it is safe to assume that this technological revolution has been met with considerable wariness across generations. According to a 2023 Forbes survey, around 75% of people across different domains are wary of integrating AI.

As we navigate this surge of change, it’s crucial to address the ethical considerations. Striking a balance between innovation and responsibility is key to ensuring that AI benefits everyone while minimizing harm. Let’s dive into real-life examples that illustrate the stakes.

1. Bias and Fairness

AI models learn from data, and if that data is biased, the outcomes will be compromised. This bias can lead to unfair decisions across a wide array of domains, especially in areas like hiring, lending, and law enforcement.

In 2018, Amazon scrapped an AI recruiting tool after discovering it was biased against women. The system was trained on resumes submitted over a ten-year period, predominantly from men, leading it to downgrade resumes that included the word “women’s” or names of all-women colleges. This highlighted the risk of AI perpetuating existing gender biases in the workplace.

Instances like this can be neutralised by incorporating more diverse datasets, performing regular “quality checks” of the model through human oversight, and making the algorithm’s decision-making process transparent (explainable AI); a sketch of one such check follows below.
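
To make the “quality check” idea concrete, here is a minimal sketch in Python of a disparate-impact audit. The data, group labels, and the 80% threshold (the widely used “four-fifths” rule of thumb) are illustrative assumptions, not a prescribed standard:

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Share of positive (e.g. 'hire') decisions per demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred  # pred is 1 for a positive decision, else 0
    return {g: positives[g] / totals[g] for g in totals}

# Illustrative data: hire/no-hire predictions and each applicant's group.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["men", "men", "men", "men", "women", "women", "women", "women"]

rates = selection_rates(preds, groups)
print(rates)  # {'men': 0.75, 'women': 0.25}

# Flag the model if one group's selection rate falls below 80% of another's.
worst, best = min(rates.values()), max(rates.values())
if best > 0 and worst / best < 0.8:
    print("Warning: disparate impact detected; review data and features.")
```

Run periodically on real predictions, a check like this could surface the kind of skew Amazon’s tool exhibited before it reaches production.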

Since we can never avoid mistakes or errors altogether, we should at least strive for our policies, and the models we use, to be as transparent and explainable as possible.

2. Privacy and Surveillance

AI technologies that involve large-scale data collection are understandably taken with a pinch of salt. Facial recognition software and predictive models that monitor and track individuals pose significant privacy risks, and any data breach compounds them. One might argue that there are positive aspects, such as identifying suspects in surveillance footage and content moderation on social media.

In 2019, San Francisco became the first major city to ban the use of facial recognition technology by local government agencies, including the police. This decision came amid concerns about privacy and the potential for misuse of data collected for surveillance purposes.

Giving individuals control over their data is crucial. This includes obtaining explicit consent and providing clear opt-out mechanisms, as Europe’s GDPR mandates; the sketch below shows the principle in code. Giving people a choice goes a long way toward instilling trust. Notably, San Francisco rolled back parts of its ban in 2024.
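
As a toy illustration of consent-first data handling (not a GDPR compliance implementation; all names here are hypothetical), processing can be gated on purposes the user has explicitly opted into:

```python
from dataclasses import dataclass, field

@dataclass
class UserRecord:
    user_id: str
    consented_purposes: set = field(default_factory=set)

def can_process(user: UserRecord, purpose: str) -> bool:
    """Process data only for purposes the user explicitly opted into."""
    return purpose in user.consented_purposes

def opt_out(user: UserRecord, purpose: str) -> None:
    """A clear opt-out: withdrawing consent is as easy as granting it."""
    user.consented_purposes.discard(purpose)

alice = UserRecord("alice", {"analytics"})
print(can_process(alice, "facial_recognition"))  # False: never consented
opt_out(alice, "analytics")
print(can_process(alice, "analytics"))           # False after opting out
```

The key design choice is that the default answer is “no”: absent an explicit opt-in, the data simply is not processed.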

3. Autonomy and Decision-Making

The question remains: should AI be barred from certain domains of decision-making? And to what extent should it be trusted with critical decisions in healthcare, law enforcement, and autonomous vehicles?

In 2023, a Tesla Model S reportedly ran a red light while in Autopilot mode and crashed into another car, killing two people. The incident raised questions about the decision-making processes of autonomous vehicles.

On the other hand, AI is also consistently being used to improve road safety. Motive’s AI-powered dashcam reliably detects unsafe driving behaviour on national highways. Accidents do happen, but with each development cycle, the protocols governing how much these systems can be trusted are revisited and made more robust. These protocols try to ensure that safety is always the highest priority by integrating practices like human-in-the-loop review, ethical design, and continuous monitoring (a sketch of the human-in-the-loop pattern follows below). We also need to put things into perspective: humans are prone to mistakes too, and accidents happen with or without AI.
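
Here is a minimal human-in-the-loop sketch in Python: the model acts alone only when it is confident, and escalates everything else to a person. The threshold, event names, and review queue are illustrative assumptions, not any vendor’s actual design:

```python
# Decisions below the confidence threshold are escalated to a person
# instead of being acted on automatically.
CONFIDENCE_THRESHOLD = 0.9
review_queue = []

def classify_event(event_id: str, label: str, confidence: float) -> str:
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"{event_id}: auto-flagged as '{label}'"
    review_queue.append((event_id, label, confidence))
    return f"{event_id}: sent to human review (confidence {confidence:.2f})"

print(classify_event("cam-014", "harsh braking", 0.97))
print(classify_event("cam-015", "distracted driving", 0.62))
print("Pending human review:", review_queue)
```

Routing borderline cases to a reviewer keeps a human accountable for exactly the decisions the model is least equipped to make.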

4. Long-Term Implications and Governance

It is no secret that the darker corners of the internet keep up with every advancement in AI. This opens up new challenges, including the potential for misuse and the need for robust governance structures to manage AI’s impact.

Deepfake technology uses AI to create realistic but fake videos, posing significant risks of misinformation and defamation.

Establishing ethics committees to review and guide AI projects provides oversight and ensures ethical considerations are integrated. Many tech companies, including Google, have formed ethics advisory boards. Promoting global cooperation on AI standards can address challenges that transcend national boundaries.

Author’s note

As we continue to push the boundaries of what we can achieve with AI, we must remember that it is an extremely powerful tool. By bringing issues such as bias, privacy, autonomy, and governance into the mainstream conversation, we can ensure that AI development and integration proceed with fairness, transparency, and ethics.

This is just the beginning: despite the initial pushback, more and more professions are looking to automate and restructure their work. Robust ethical frameworks, global cooperation, and regulation are improving rapidly to keep pace with AI.

Stay tuned to our blog for more insights and discussions on emerging technologies and the future of conceptualization.

Vaishnavi Bojja is currently pursuing a Master’s degree in Computer Science at Blekinge Institute of Technology and freelances as an author at Tech Concept Lab.
