Agentic AI and the Future of Higher Education
Artificial intelligence has already secured a foothold in higher education, but institutions are now entering a more consequential phase of adoption. According to Ashraf Davids, Solutions Manager for Digital Business Solutions at Datacentrix, a leading hybrid ICT systems integrator and digital transformation partner, the conversation is shifting rapidly from experimentation with generative AI to the operational impact of agentic AI, a move that could reshape how universities function.
“Generative AI has proven its value in enhancing productivity and supporting teaching and learning,” says Davids. “But agentic AI is where higher education institutions can see real transformational change, particularly when it comes to administration, compliance and decision making.”
From Content Creation to Autonomous Action
For many education environments, generative AI has become familiar territory. It can summarise documents, assist with lecture preparation, support students with research and respond to prompts on demand. Its value lies in speed, scale and accessibility.
Agentic AI, however, goes beyond assistance into the realm of action. Unlike generative models that wait for input, agentic AI systems can reason, make decisions and execute tasks autonomously across multiple systems.
“An agentic AI doesn’t just answer a question,” Davids explains. “It can complete an end-to-end process, validate inputs, apply rules, trigger approvals, update systems and provide an auditable record of every step.”
This distinction is particularly significant for higher education organisations grappling with complex administrative workloads, growing student numbers and constrained resources.
Admissions, Engagement and Student Success
Davids maintains that one of the most immediate opportunities for agentic AI in education lies in student admissions. While generative AI can summarise applications and supporting documents, agentic AI can take responsibility for the workflow itself. It can validate documentation, flag missing information, prompt applicants automatically, update internal systems and communicate application status in real time, all while logging actions for audit and compliance purposes.
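As an illustration only, the kind of rule-driven admissions workflow described above might be sketched as follows. The required documents, status values and audit-log format here are hypothetical, not a description of any Datacentrix implementation:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical set of documents an institution might require.
REQUIRED_DOCS = {"id_document", "academic_transcript", "proof_of_payment"}

@dataclass
class Application:
    applicant_id: str
    documents: set
    audit_log: list = field(default_factory=list)

    def log(self, action: str) -> None:
        # Every step is timestamped so the full process is auditable.
        self.audit_log.append(f"{datetime.now(timezone.utc).isoformat()} {action}")

def process_application(app: Application) -> str:
    """Validate documentation, flag gaps and report status, logging each step."""
    app.log("validation started")
    missing = REQUIRED_DOCS - app.documents
    if missing:
        # In a live system this would also trigger an automated prompt
        # to the applicant and update the student information system.
        app.log(f"missing documents flagged: {sorted(missing)}")
        return "awaiting_documents"
    app.log("all documents validated; routed for approval")
    return "ready_for_review"

app = Application("A-1001", {"id_document", "academic_transcript"})
status = process_application(app)
```

The point of the sketch is the shape of the agentic loop: the system does not merely summarise the application, it validates, decides, acts and leaves a record.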
Beyond admissions, agentic AI also enables more proactive student engagement. By integrating with learning management systems and institutional data sources, agentic AI can identify at-risk students, trigger alerts, schedule interventions and escalate issues to academic or support staff when necessary.
“In this model, AI becomes more of an operational partner,” he continues. “It supports staff by removing administrative friction, while still ensuring that humans remain in control of decisions that require judgement or empathy.”
Compliance as the Foundation, Not an Afterthought
While enthusiasm for AI adoption is growing, Davids stresses that compliance and governance must underpin every deployment, particularly in regulated environments such as education. With a significant portion of employees already using public AI tools, institutions face increasing exposure to data leakage, intellectual property loss and regulatory risk.
“A recent global survey by the University of Melbourne and KPMG found that emerging economies are leading in employee adoption, with 72% of employees using AI regularly compared to 49% in advanced economies. However, fewer than half of businesses currently have an AI governance policy in place.
“Once sensitive data leaves your environment, you lose control,” he warns. “This is why governance, role-based access and auditability are non-negotiable. We’ve seen a number of issues globally arising around unsanctioned tools, audit trails, user tracking and control. For instance, the French competition authority, the Autorité de la concurrence, fined Google more than $271 million over copyright issues relating to large language models (LLMs) and a lack of governance. And Italy’s data watchdog penalised an AI chatbot developer, fining the company more than €5.6 million. These are significant costs, and they emphasise what can happen when compliance isn’t taken seriously.
“In South Africa, legislation such as POPIA is already shaping how organisations handle data, and these global enforcement actions signal that regulators are paying close attention to AI governance. Agentic AI systems, by their nature, amplify both the benefits and the risks of automation, making compliance-by-design essential.
“Questions to ask should include the following: Which licences are in use? Where are chats being stored? How compliant are they with legislation and institutional policies? Are you able to monitor the data that goes into AI tools? Can you track who is doing what across departments? Is your intellectual property protected? Are you putting your business at risk due to lack of compliance? Are your employees using public or free AI tools? Have you blocked public LLMs from corporate networks?”
Datacentrix’s approach focuses on secure, governed AI environments where interactions are contained, monitored and auditable. “You need to know who is using AI – role-based access is very important – as well as having an understanding of the data that users are accessing and how decisions are being made,” says Davids. “Without this type of visibility, the risk quickly outweighs the reward.”
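To make the role-based access point concrete, a minimal, hypothetical sketch might look like this. The roles, permissions and log format are illustrative assumptions, not Datacentrix's actual controls:

```python
# Hypothetical mapping of institutional roles to permitted AI actions.
ROLE_PERMISSIONS = {
    "admissions_officer": {"query_applications"},
    "lecturer": {"summarise_course_material"},
}

# Auditable record of who attempted what, and whether it was allowed.
access_log = []

def authorised_call(user: str, role: str, action: str) -> bool:
    """Permit an AI action only if the user's role allows it; log every attempt."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    access_log.append((user, action, allowed))
    return allowed

ok = authorised_call("thandi", "admissions_officer", "query_applications")
blocked = authorised_call("sipho", "lecturer", "query_applications")
```

Even a gate this simple delivers the visibility Davids describes: every interaction is attributable to a user and a role, and denied attempts are recorded rather than silently lost.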
A New Operating Model for Higher Education
As higher education institutions look to scale operations while improving student experience, agentic AI offers a compelling path forward.
“Generative AI can most definitely help to enhance learning and knowledge work, but it is agentic AI that has the potential to redefine institutional operations, from admissions and administration to student support and compliance management.
“The real shift is not technological, it’s organisational. Agentic AI enables institutions to rethink how their work is done. Those that embed security, governance and purpose into their AI strategies from the outset will be best positioned to realise its full value,” Davids concludes.

