The Gist of AI Policies

AI offers transformative potential across sectors like healthcare, education, and manufacturing, but its adoption raises challenges like privacy breaches and bias. To address these, robust internal policies are essential for responsible AI development and usage.

05/08/2024 Emil Holmegaard, PhD | Management Consultant at 7N

AI has transformative potential across industries, especially healthcare, education, manufacturing, and even white-collar work. But using AI also presents significant challenges, such as privacy violations, bias, ethical dilemmas, and security threats. So, how do you establish clear and effective internal policies and guidelines for developing and using AI systems?


The Fundamentals of AI Policy

One of the main objectives of AI policies is to ensure that AI aligns with human values and serves the public interest. Achieving this requires a collaborative, interdisciplinary approach involving multiple stakeholders, such as governments, researchers, industry, civil society, and users. Here are some of the fundamental principles that should guide policies related to AI:

Transparency: AI systems should be transparent and explainable for human understanding and scrutiny. 

Accountability: AI systems should be held accountable and subject to appropriate oversight and regulation to address their impact.

Fairness: AI systems must avoid causing harmful or discriminatory effects on individuals or groups and should strive to be fair and inclusive. 

Privacy: AI systems must comply with laws and standards and protect the personal data and the privacy of individuals and organizations. 

Security: AI systems should be secure, robust, and able to withstand malicious attacks or errors. 

AI policies are not static or universally applicable. Instead, they must be updated and adapted to changing contexts and needs in different sectors and regions. It is, therefore, crucial to foster a continuous learning and dialogue culture among all involved in designing and deploying AI systems. 

Regulatory Reactions

Recently, European institutions agreed upon the world’s first comprehensive law on artificial intelligence, the EU AI Act, which is set to enter into force in Q2-Q3 2024, with compliance transition periods of 6 to 36 months. It categorizes AI systems by risk level, imposing stricter obligations on high-risk systems such as medical devices and recruitment tools and emphasizing transparency, ethical usage, and human oversight. The AI Act has extraterritorial scope and affects entities beyond the EU's borders. To avoid penalties, entities should be prepared to comply with these regulations as they come into effect (European Commission, 2024). As the US Congress reviews proposed legislation, several American cities and states have already enacted laws restricting AI use in certain areas, such as police investigations and hiring. Additionally, President Joe Biden has instructed government agencies to thoroughly evaluate any AI products for potential economic or national security risks before implementing them (Bloomberg, 2024).

Considerations: Regulation or Openness?

A key consideration in AI policies is balancing the trade-offs between regulating the AI ecosystem and maintaining openness. AI regulation encompasses laws, policies, guidelines, and standards that govern AI development, deployment, and use. Openness refers to the degree of transparency and accessibility of AI resources, such as data, models, algorithms, and platforms. 

Regulation and openness each have benefits and drawbacks. On one hand, regulation can ensure AI's quality, safety, ethics, and accountability and protect the rights and interests of stakeholders, such as researchers, developers, users, and society. However, regulation can impose constraints and costs on AI innovation and dissemination, creating barriers and conflicts for collaboration and competition among actors such as academia, industry, and government. 

On the other hand, openness can promote AI's creativity, diversity, and efficiency and facilitate the sharing and reuse of AI resources. However, openness can also raise concerns regarding privacy, security, intellectual property, and fairness and expose the technology's limitations and vulnerabilities. 

Therefore, depending on the context and objectives, we need to find a suitable balance between regulation and openness. We must comply with relevant regulations and respect the ethical principles and social values surrounding AI, while still embracing the opportunities and benefits of openness and contributing to the advancement and dissemination of AI knowledge and technology. 

Is it necessary to add regulations to ensure transparency and trustworthiness? Or should we point to examples of openness in how a given model has been developed to show AI providers and deployers the benefits of a full data trace and governance model? In the author's view, the best way to establish transparency and trust in AI is to create a standard for data trace and governance models, which would encourage all AI model providers to include data traces and governance models in their offerings. 


Concluding Remarks

Although AI is a powerful and promising tool, it carries significant challenges and risks. It is crucial to understand the ethics, risks, pitfalls, and opportunities associated with the use of AI for data collection, analysis, and innovation. Doing so will help ensure that AI models are trustworthy and transparent.

About the Author
Emil Holmegaard holds a Ph.D. in Software Engineering and has over ten years of experience in software development, architecture, and governance of IT projects. He is a software quality and architecture specialist, a management consultant, and a TOGAF-certified architect. His passion for analysing and exploring the challenges at the intersection of advanced technologies and business allows him to solve technical issues and help businesses become more agile and profitable.