This might be the most important job in AI


The launch of ChatGPT marked the dawn of a new era in the corporate world. Generative AI, the technology behind it, swiftly proved capable of writing emails, producing code, and creating graphics within minutes. The traditional workday, filled with inbox management and meticulous presentation crafting, quickly came to feel outdated. Tempted by potential profits and efficiency gains, companies eagerly adopted the technology. In a McKinsey & Company survey from May, 65% of the more than 1,300 companies surveyed reported regularly using generative AI—twice the share from the previous year.


However, significant risks accompany this rapid adoption. If not adequately managed, generative AI can hallucinate, spread misinformation, and reinforce biases against marginalized groups. Its reliance on vast amounts of sensitive data also raises the risk of data breaches. More alarmingly, as the technology advances, it could drift away from human values. Given this immense power, companies must govern their generative AI responsibly, and this is where a chief ethics officer plays a crucial role.


**A Critical Role in the Age of AI**


While the specifics of the job vary across companies, a chief ethics officer generally assesses the societal impact of a company's AI use. Var Shankar, chief AI and privacy officer at Enzai, explains that the role involves considering how AI affects not just the company’s bottom line but also its customers, broader societal groups, and the environment. This entails developing programs to standardize and scale these considerations every time AI is employed. 


The role opens pathways into the evolving tech industry for policy enthusiasts and philosophy graduates as well as programming experts, often with substantial compensation in the mid-six figures. Yet companies have been slow to fill these positions, according to Steve Mills, chief AI ethics officer at Boston Consulting Group. He notes a gap between discussions of risk and principles and actual implementation within organizations.


**A C-Suite Level Responsibility**


Mills outlines four essential expertise areas for successful candidates: a technical understanding of generative AI, experience in product development and deployment, knowledge of AI regulations, and significant organizational decision-making experience. He often observes mid-level managers being assigned these responsibilities despite lacking the authority to effectively drive change within the organization. Every Fortune 500 company using AI extensively should appoint an executive to oversee a responsible AI program, he suggests.


Shankar, a trained lawyer, emphasizes that no specific educational background is required. What's crucial is an understanding of a company's data: its ethical implications, its origins, and the consent associated with its collection and use. He cites a study published in Science, in which healthcare algorithms unfairly prioritized healthier white patients over sicker Black patients, as an example of the kind of bias an ethics officer can help prevent.


**Collaborating Across Companies and Industries**


Effective communication with various stakeholders is essential for those in this role. Christina Montgomery, IBM's vice president, chief privacy and trust officer, and chair of its AI Ethics Board, describes a typical day filled with client meetings and speaking engagements. She participates in boards like the International Association of Privacy Professionals and collaborates with government leaders and other chief ethics officers to shape the future of AI ethics.


Montgomery stresses the importance of regular inter-company conversations to share best practices, aiming to develop a comprehensive understanding of societal trends. Her concern is the lack of global interoperability in AI regulations, which complicates compliance for companies. Therefore, ongoing dialogue between companies, governments, and boards is crucial.
