LEADING THE CONVERSATION ON AI AND CORPORATE ACCOUNTABILITY
In March 2025, Bowmans Kenya hosted its third Annual Corporate Investigations Seminar, bringing together legal experts, regulators, corporate leaders, and technologists to examine one of the most transformative forces in today’s business environment: Artificial Intelligence (AI). The commentary in this issue of our newsletter has been kindly provided by Bowmans following the Seminar.
This year’s seminar, themed “Investigating with Intelligence: The Role of AI in Corporate Investigations”, was driven by a clear set of objectives: to explore the legal, ethical, and operational implications of AI in corporate investigations and to promote its responsible adoption across sectors.
As AI becomes increasingly embedded in compliance and risk functions, understanding how to harness its potential without compromising fairness, privacy, or accountability has never been more critical.
Key insights from the seminar included:
- The necessity to “Africanise” AI to mitigate data bias against African populations.
- The growing need for human oversight in AI-assisted investigations.
- Legal, ethical, and regulatory issues concerning the admissibility of AI-generated evidence.
- The importance of cross-functional collaboration and sector-specific regulations in effective AI integration.
- A live demonstration showcasing AI tools in corporate investigation scenarios.
The full report below delves more comprehensively into these themes and into the growing need for leadership, innovation, and ethical clarity as AI reshapes the investigative landscape in Africa and beyond.
Risk Management in the age of AI – investigations and workplace safeguards
AI is rapidly reshaping how organisations approach corporate investigations. While its capabilities in data analysis and fraud detection are undeniable, integrating AI into investigative workflows raises important challenges around ethics, legal defensibility, and technological readiness. One of the most critical concerns is the inherent bias in AI systems, which stems from training data that often lacks regional specificity or fairness, particularly in African contexts.
A significant hurdle for many organisations is the infrastructure required to host AI models securely. Most companies lack the internal capacity to deploy advanced AI tools and often rely on third-party platforms that introduce data privacy and governance risks. Innovations such as DeepSeek, whose openly released model weights can be hosted in-house, are helping to close this gap, but widespread adoption remains limited. In sectors such as banking, however, AI is already proving valuable in detecting suspicious patterns and preventing financial misconduct.
Cybersecurity is another pressing concern. AI systems ingest large volumes of user data through prompts and are therefore vulnerable to data poisoning, manipulation, and unauthorised access. SOAR (Security Orchestration, Automation, and Response) platforms help mitigate these risks by automating security workflows and coordinating incident responses, enhancing both speed and accuracy during investigations.
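By way of illustration, the sketch below shows, in heavily simplified form, how a SOAR-style playbook coordinates a fixed sequence of response steps when an alert arrives. Every field, threshold, and step here is a hypothetical assumption for illustration; real SOAR platforms orchestrate integrations with SIEM, ticketing, and endpoint tools rather than printing actions.

```python
# Minimal sketch of a SOAR-style automated playbook (illustrative only).
# The event fields, severity threshold, and response steps are assumptions,
# not the behaviour of any specific SOAR product.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class SuspiciousEvent:
    source: str    # e.g. "auth-gateway" (hypothetical alert source)
    user: str      # account implicated in the alert
    severity: int  # 1 (low) to 5 (critical)


def run_playbook(event: SuspiciousEvent) -> list[str]:
    """Coordinate a fixed sequence of response steps for a single alert."""
    stamp = datetime.now(timezone.utc).isoformat()
    actions = [f"{stamp} triage alert from {event.source}"]
    if event.severity >= 4:
        # High-severity alerts trigger containment before human review.
        actions.append(f"suspend sessions for user={event.user}")
        actions.append("open incident ticket, priority P1")
    else:
        actions.append("queue for analyst review")
    actions.append("write audit log entry")  # every step logged for defensibility
    return actions


if __name__ == "__main__":
    alert = SuspiciousEvent(source="auth-gateway", user="j.doe", severity=4)
    for step in run_playbook(alert):
        print(step)
```

The value of the pattern lies in the response sequence being codified and auditable, which is what gives automated workflows their speed without sacrificing accountability.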
Organisations must also consider internal coordination. Effective investigations often require seamless collaboration across departments, including compliance, legal, HR, and IT. This can be strengthened through formal policies, internal regulations, and cross-training teams on investigative protocols. The use of augmented AI tools, which visually map out investigation scenarios, was also highlighted as a valuable method for improving foresight and communication.
Legal and ethical considerations remain central. Companies were advised against practices such as seizing employees’ devices during investigations, due to privacy and legal risks. Key challenges identified include the use of inaccurate or incomplete data, the inadmissibility of AI-generated evidence in court, and employee privacy concerns. These require thoughtful mitigation strategies to avoid undermining the integrity of investigations.
To address these risks, organisations should train their teams on the proper use of AI, conduct thorough risk assessments before adopting tools, and obtain informed consent from individuals whose data may be processed. AI should be used as a support mechanism rather than a primary evidence source. Furthermore, third-party agreements with AI vendors should clearly define data access, usage, and storage terms. Ultimately, responsible integration of AI depends not only on the tools themselves but on the policies, people, and principles guiding their use.
Investigating with Intelligence – the role of AI in modern investigative practice
Corporate investigations are increasingly dealing with sensitive issues such as sexual harassment, fraud, nepotism, and misconduct involving third-party service providers. These challenges demand investigative mechanisms that are both rigorous and legally sound. A key point of emphasis is the need for clearly defined legal frameworks to guide inquiries involving external parties, ensuring that all processes are compliant, accountable, and defensible.
AI tools are emerging as valuable assets in this space, offering efficiency and analytical capabilities that can enhance investigative outcomes. Demonstrations at the seminar showcased how AI platforms can identify patterns and anomalies in large datasets, improving both the speed and accuracy of investigations. However, it was emphasised that these tools must be deployed with careful human oversight to mitigate risks of algorithmic bias and ensure contextual understanding. Human intervention remains essential for interpreting results, maintaining ethical standards, and providing legal defensibility.
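By way of illustration, the sketch below flags outlying payment amounts in a synthetic ledger. The data is fabricated, and the model choice (scikit-learn’s IsolationForest) and contamination threshold are assumptions for illustration; the specific platforms demonstrated at the seminar are not reproduced here, and anything the model flags would still go to a human investigator, in line with the oversight point above.

```python
# Illustrative anomaly detection over a synthetic payments ledger.
# Model, threshold, and data are assumptions, not the seminar's tools.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Synthetic ledger: mostly routine payment amounts, plus a few outliers.
routine = rng.normal(loc=500, scale=50, size=(200, 1))
outliers = np.array([[5_000.0], [7_500.0], [12.0]])
amounts = np.vstack([routine, outliers])

model = IsolationForest(contamination=0.02, random_state=0).fit(amounts)
flags = model.predict(amounts)  # -1 marks a suspected anomaly

for amount, flag in zip(amounts.ravel(), flags):
    if flag == -1:
        print(f"flag for analyst review: {amount:,.2f}")
```

The inexpensive part is the flagging; the defensible part is what happens afterwards, which is why the surrounding process matters as much as the model.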
The importance of transparency in AI decision-making was also underlined. Any AI system used in investigations must produce explainable outcomes and withstand legal scrutiny. Attendees were reminded that unclear or opaque AI outputs can undermine the credibility of an investigation. Additionally, organisations must navigate the delicate balance between employee privacy and investigative needs, particularly when accessing data on personal versus company-owned devices. They should establish clear internal policies to govern such practices.
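What an “explainable outcome” can look like in practice is sketched below, under the simplifying assumption of a linear model: each flagged record is accompanied by per-feature contributions that an investigator, or ultimately a court, can interrogate, rather than a bare score. The feature names, data, and labels are hypothetical.

```python
# Sketch of an explainable flag, assuming a simple linear model; the feature
# names, data, and labels are hypothetical. For a linear model, coefficient
# multiplied by feature value gives that feature's contribution to the
# log-odds of the "suspicious" label, so each flag can be decomposed.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["amount_zscore", "after_hours", "new_beneficiary"]
X = np.array([
    [0.1, 0, 0], [0.3, 0, 0], [0.2, 1, 0], [4.5, 1, 1],
    [0.0, 0, 0], [3.9, 0, 1], [0.4, 0, 0], [4.2, 1, 1],
])
y = np.array([0, 0, 0, 1, 0, 1, 0, 1])  # 1 = previously confirmed misconduct

model = LogisticRegression().fit(X, y)

record = X[3]  # the record being flagged
for name, contribution in zip(features, model.coef_[0] * record):
    print(f"{name:>16}: {contribution:+.2f}")
print(f"flag probability: {model.predict_proba([record])[0, 1]:.2f}")
```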
“AI is no longer the future – it is now. The successful integration of AI into corporate investigations demands nuanced thinking, robust regulation, and ethical leadership.” – Dr Bright Gameli, CEO, Cyber Guard Africa
Finally, in cross-border investigations, legal and regulatory inconsistencies further complicate the use of AI. With varying privacy laws and evidentiary standards across jurisdictions, a one-size-fits-all approach is insufficient. Companies must adapt their investigative practices accordingly and remain mindful of AI’s potential to impact workplace culture negatively: poorly configured or inadequately trained systems may come across as intrusive or discriminatory. Ethical deployment, coupled with robust training and governance, is crucial to ensuring AI supports, rather than undermines, investigative integrity.
Data, AI, and the future of enforcement – challenges and opportunities
As AI becomes more entrenched in decision-making processes across sectors, a significant concern is the under-representation of African data in training datasets. This gap has resulted in biased outputs that fail to reflect the realities of African users and contexts. The call to “Africanise” AI was emphasised not only as a matter of fairness but as a necessary step to ensure that the tools used by African institutions are accurate, inclusive, and locally relevant.
Alongside these representation challenges, the legal and regulatory framework for AI governance was reviewed, with particular reference to Kenya’s Data Protection Act and the Computer Misuse and Cybercrimes Act. These laws provide an essential foundation, but organisations must go further by equipping themselves with the internal expertise needed to navigate complex risks. It was noted that purchasing AI tools is not enough; real security lies in understanding and managing them, especially in light of the growing inevitability of cyber breaches.
“AI systems trained on foreign data cannot be expected to solve African problems. We must feed our realities into these tools; only then will AI become relevant, inclusive, and just.” – Oscar Otieno, Deputy Data Commissioner, Office of the Data Protection Commissioner
Legal admissibility also surfaced as a critical concern. Evidence generated through AI is often not considered a primary source, raising procedural uncertainties in regulatory and judicial environments. This becomes even more pressing as AI adoption expands into judicial institutions: Kenya’s Judiciary, for example, is developing an AI adoption framework aimed at preserving judicial independence while enhancing access to justice and operational efficiency.
Despite these challenges, AI offers considerable opportunities. It can rapidly analyse vast volumes of data, serve as a proactive risk assessment tool, and significantly improve operational productivity. However, realising these benefits depends on upskilling personnel and developing sector-specific strategies that align with legal obligations and ethical standards. Discussions also highlighted the potential value of creating a unified African AI Charter, drawing on frameworks from the African Union, the East African Community (EAC), and Kenya to shape a continent-wide approach to ethical and inclusive AI governance.
Balancing Regulation and Innovation – shaping Ethical AI Governance
As AI continues to evolve, so does the urgency for thoughtful, inclusive, and ethical regulation. A key concern raised was the risk of bias and error in AI systems, which can have far-reaching implications for fairness and accountability. Participants emphasised the importance of adopting AI solutions that allow for human intervention, particularly when making decisions that impact individuals or society. For multinational companies, the challenge lies in ensuring compliance across diverse legal jurisdictions, which requires an understanding of both global and local regulatory nuances.
The role of government was identified as critical in striking the right balance between fostering innovation and imposing regulation. Borrowing from mature legal systems can offer a starting point, but excessive regulation may stifle AI development. Instead, governments should build frameworks that enable safe experimentation, promote industry growth, and mitigate risk simultaneously. Notably, while international frameworks such as the EU AI Act, the U.S. AI Bill of Rights, and Africa’s various continental strategies offer valuable benchmarks, there is concern that many of these policies are being developed hastily, potentially at the expense of quality, depth, and contextual relevance.
A call was made for sector-specific flexibility, allowing industries to tailor AI regulations according to their unique operational risks and needs. This approach empowers sectors such as healthcare, agriculture, and finance to define how AI should evolve within their specific contexts, while still aligning with universal ethical principles. Ethical use of AI, it was agreed, must be grounded in human oversight, vendor accountability, and rigorous impact assessments, all of which support transparency, fairness, and trust.
Finally, the launch of Kenya’s National AI Strategy was welcomed as a landmark step. The strategy aims to position Kenya as a leader in responsible AI innovation by promoting adoption across key sectors, building local technical capacity, and addressing challenges such as infrastructure gaps and digital literacy. It reflects an understanding that AI regulation cannot be static; it must be adaptive, forward-looking, and grounded in both local realities and global best practices.
Conclusion – responsible AI is a collective mandate
“As we lead the conversation on AI in corporate investigations, our responsibility is clear: to shape a future where technology serves justice, protects people, and strengthens accountability.” – Terry Mwango, Partner and Host of the 2025 Corporate Investigations Seminar, Bowmans Kenya
The 2025 Corporate Investigations Seminar made one thing abundantly clear: Artificial Intelligence is no longer a future concept; it is a current reality shaping how organisations manage risk, enforce compliance, and uphold accountability. Across all discussions, there was consensus that while AI offers transformative potential, its power must be harnessed thoughtfully, ethically, and with strong human oversight.
Key themes emerged, ranging from the urgent need to “Africanise” AI data to ensure fairness, to the importance of building legal frameworks that can withstand judicial scrutiny. The role of human judgment in AI-assisted investigations, the necessity of cross-departmental collaboration, and the challenge of navigating diverse regulatory environments were also emphasised. These insights are not theoretical; they are practical, pressing, and directly relevant to how businesses operate today.
This is a commentary, and comments are welcome by email to: info@eaa.co.ke.