From Compliance to Care

Last Updated on December 29, 2025 by Editorial Team

Author(s): Dr. Vasileios Ioannidis

Originally published on Towards AI.

Global AI Regulations

How Software Becomes Law

Modern HR platforms do more than process payroll or store data. They observe how employees behave, infer intentions and gently push them toward certain choices. Across continents, algorithmic systems are ranking candidates, suggesting promotions and flagging “anomalous” behaviour. In effect, software is beginning to govern the workplace. After decades spent designing HRIS, HRMS and EOR platforms, I have learned that every line of code carries regulatory and psychological consequences. AI in HR is never just a feature; it is a classification of risk, a compliance obligation and a litmus test for our duty of care. If we ignore that, we build efficient machines that quietly undermine agency and trust.

There is a moment many of us recognise: the system offers a suggestion you had not planned, and it feels both helpful and unsettling. “When the machine knows your habits better than your manager, who is truly making the decisions?” This piece is not anti‑technology; it is a call to design technology with conscience and to champion ethics even when they slow us down.

In the pages that follow, I translate the world’s major AI regulations into concrete product requirements. I explain why fairness and transparency must be part of the architecture and why companies need a governance architect who can navigate European and American rules while protecting human dignity. My aim is to position you as the unique expert who can fuse law, ethics and psychology into competitive advantage.

The EU AI Act: High‑Risk Systems and Human Oversight

The EU AI Act is the first law to rank AI systems by the harm they can cause. It treats employment‑related tools — those used to hire, fire, promote or monitor people — as high‑risk systems (artificialintelligenceact.eu). Providers of such systems must implement risk management, rigorous data governance and record‑keeping, design for human oversight and cybersecurity, and avoid certain banned practices like manipulation, exploitative biometrics or social scoring (artificialintelligenceact.eu). In parallel, the GDPR gives people a right not to be subject to solely automated decisions with significant effects, and decisions about jobs clearly fit this definition (ico.org.uk). As a result, product teams must ensure that any AI recommendation affecting someone’s livelihood is reviewed by a human and that individuals can contest or understand these decisions (ico.org.uk). Meeting these obligations means building risk assessments, audit logs and transparent interfaces into the core architecture — not as afterthoughts.
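To make the "human oversight plus audit trail" requirement concrete, here is a minimal sketch of how it can be expressed in code. This is an illustration under my own assumptions, not a reference implementation, and every class and field name here is hypothetical. The key design point: the AI output is typed as a recommendation, never a decision; a decision cannot exist without a named human reviewer, and the log stores the AI suggestion and the human outcome side by side so overrides are visible to auditors.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Recommendation:
    """An AI output about a person -- by design, never a final decision."""
    candidate_id: str
    suggestion: str      # e.g. "advance", "reject"
    model_version: str
    rationale: str       # plain-language explanation shown to the reviewer

@dataclass
class Decision:
    recommendation: Recommendation
    reviewer: str        # a named human, required by construction
    outcome: str
    note: str

class OversightGate:
    """Refuses any outcome that lacks a human reviewer; logs everything."""

    def __init__(self) -> None:
        self.audit_log: list[dict] = []

    def finalise(self, rec: Recommendation, reviewer: str,
                 outcome: str, note: str) -> Decision:
        if not reviewer:
            # Solely automated decisions with significant effects are blocked.
            raise ValueError("A named human reviewer is required")
        decision = Decision(rec, reviewer, outcome, note)
        self.audit_log.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "candidate": rec.candidate_id,
            "model": rec.model_version,
            "ai_suggested": rec.suggestion,
            "human_outcome": outcome,  # may differ: the human can override
            "reviewer": reviewer,
            "note": note,
        })
        return decision
```

Because the suggestion and the outcome are recorded together, the audit trail itself becomes evidence of meaningful human involvement rather than rubber-stamping.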

These obligations carry teeth: regulators can impose multimillion‑euro fines, and implementation deadlines begin in 2025. They force vendors to shift from “move fast and break things” to “design carefully and document everything.” As product owners, our design choices become legal instruments — a fact that transforms compliance into competitive advantage.

Ethical Frameworks: From Principles to Product Requirements

Beyond law, Europe and the wider world have issued ethics guidelines that underpin responsible AI. The European Commission’s Trustworthy AI guidelines and the OECD AI Principles call for human agency, safety, privacy, transparency, fairness and accountability (oecd.ai). UNESCO’s recommendation emphasises that human rights and dignity are the cornerstone of any AI deployment (unesco.org).

In practice, for HR products such as HRIS/HRMS and EOR platforms, these principles translate into basic design questions:
Do we obtain meaningful consent?
Can an employee understand and challenge automated decisions?
Are training datasets representative and tested for bias?
Do we explain who is responsible for the algorithm’s outputs?

I always try to embed these questions into product roadmaps, designing screens that disclose when AI is used and processes that require human sign‑off.

At first glance, phrases like “respect human rights” or “promote inclusive growth” sound abstract — almost ceremonial. Easy to agree with. Easy to ignore. But in practice, these principles live or die inside small, uncomfortable product decisions that shape how people experience power. When a developer pauses to ask whether a user’s behavioural history really needs to be stored to marginally improve a recommendation, the principle of data minimisation is no longer theoretical. It becomes a question of restraint. Of boundaries. Of whether the system is designed to serve the user, or quietly consume them.

When an engineer argues for a black-box model because it delivers higher accuracy, transparency stops being a philosophical ideal and becomes a psychological necessity for the actual user: the employee. Explainability is not for the person deploying the model, or the team celebrating its performance. It is for the person on the other side of the decision — whose opportunities, confidence, or sense of fairness may hinge on logic they are never allowed to see. Systems that cannot explain themselves do not just create opacity; they create anxiety, mistrust, and disengagement.

This is where values either remain slogans or become architecture. When we embed them into features, defaults, and constraints, we do more than comply or signal virtue. We shape how safe people feel inside our systems. We decide whether technology earns trust or quietly erodes it. And in doing so, we build something far more durable than technical performance: reputational capital grounded in psychological credibility.

The UK’s Sectoral Approach and Pro‑Innovation Principles

Unlike the EU, the UK has opted for a lighter framework. Its 2023 policy paper lists five principles — safety and robustness, transparency, fairness, accountability and contestability — to guide existing regulators (gov.uk). For HR tools, the fairness principle stands out: AI must not erode rights or embed discrimination. The Information Commissioner’s Office reminds companies that solely automated decisions with legal effects are generally unlawful (ico.org.uk). Even under a pro‑innovation banner, human accountability cannot be waived. In practice, I harmonise UK flexibility with EU rigour by adopting the strictest requirements across both regimes, ensuring products remain compliant whatever politics bring.

The UK’s approach reflects its economic priorities: rather than create a new regulator, it trusts sectoral bodies to adapt the five principles to their domains. Innovation can flourish under this flexibility, but it may lead to patchy enforcement. Companies operating in Britain should therefore exceed the minimum; aligning with EU standards mitigates future policy swings.

U.S. Frameworks: Voluntary Standards with Enforcement Signals

The United States offers guidance rather than statute — at least for now! The NIST AI Risk Management Framework divides responsible AI into four functions (nvlpubs.nist.gov):
1. governing policies,
2. mapping use cases and stakeholders,
3. measuring performance and bias, and
4. managing risks and incidents.

The framework is voluntary and widely adopted — though less so in the HRIS/HRMS and EOR sectors than I, speaking from a user’s perspective, would like to see. Enforcement comes through agencies: the EEOC warns that discrimination laws apply even when AI is used (eeoc.gov), and the AI Bill of Rights calls for safe systems, protection against algorithmic discrimination, data privacy, notice, and human fallback options (snyk.io). These documents shape procurement and highlight that, even without federal legislation, companies must (yes, must!) build fairness(!) and transparency(!) into U.S.‑facing modules.

The U.S. context is fluid. A patchwork of state and federal initiatives underscores why adopting voluntary standards such as NIST and following EEOC guidelines is both prudent and strategic. It demonstrates a company’s commitment to fairness and risk management when engaging U.S. clients and pre‑empts future regulation.

Global Management Standards: ISO 42001 and Beyond

Structured governance is the glue between disparate laws. The ISO/IEC 42001:2023 standard is the first AI management system blueprint, offering a formal process to identify risks, define responsibilities and ensure continuous improvement, with an emphasis on ethics, transparency and accountability (iso.org). By aligning our development processes with ISO 42001, we can unify EU, UK and U.S. obligations under a single governance umbrella, maintain traceability and schedule regular impact assessments.

Certification also builds trust. When investors or partners see that a platform has an accredited management system, they infer that the company has thought through the complexities of AI. ISO alignment signals maturity in a field where hype often outpaces substance.

Toward an “AI Duty‑of‑Care” Blueprint

To reconcile legal obligations with human wellbeing, we need a Duty‑of‑Care blueprint. Such a blueprint begins by mapping and classifying high‑risk modules under the EU Act and documenting data sources, intended outcomes and potential harms. It embeds human oversight into interfaces so decision‑makers can override or explain algorithmic outputs, meeting GDPR and AI Bill of Rights expectations. It designs for fairness and privacy, regularly testing for bias and minimising data collection. It clarifies accountability through roles, technical documentation and audit trails. Finally, it addresses psychology: research shows that people perceive AI‑only screening as less fair (mdpi.com), distrust AI in sensitive decisions and hesitate to share personal data (frontiersin.org), and feel impersonalised by algorithmic interviews; monitoring can cause distress and lower job satisfaction (pmc.ncbi.nlm.nih.gov). Therefore, we must deliberately include human interaction, empathy and consent in the design. This blueprint is dynamic, evolving with the law and with our understanding.

The blueprint also educates stakeholders about why certain controls exist and how they protect the company and its people. It fosters cross‑functional collaboration so compliance becomes a commitment to ethical excellence rather than a box‑ticking exercise.

Why You Need a Human+AI Governance Architect

Building responsible HR software requires more than coders and compliance officers. It demands a leader who can translate statutes into design patterns, who understands the emotional landscape of work and who has built HR platforms from the ground up. My decades of designing HRIS and EOR systems and my doctorate in organisational psychiatry allow me to fuse law, psychology and product strategy. I do not add AI for novelty; I build guardrails that predict, prevent and perform. AI in workforce platforms is governance work, not feature work. With my guidance, your product becomes a duty‑of‑care machine rather than a liability.

From Regulation to Reputation

The next generation of HR platforms will be judged not just on their efficiency but on their ethics. Companies that see regulation as a catalyst for trust will outperform those that see it as an obstacle. By aligning with the EU’s high‑risk classification, the UK’s principles, U.S. guidance and global standards, we can build platforms that are lawful, ethical and psychologically attuned. Compliance and care are not enemies of innovation; they are its foundation. Might I suggest the following: if you aim to set the benchmark for responsible HR technology, start with a duty‑of‑care blueprint and engage a human+AI governance architect to realise it — otherwise the effort is set up to fail from the start. The story of AI at work is still being written; together, we can make it a story of empowerment, not erosion.

Dr. Vasileios Ioannidis is the founder of HackHR.org and inventor of the Tectonic HR™ methodology. With more than 25 years of experience leading global HR transformations and a doctorate in industrial‑organisational psychiatry, he helps organisations predict, prevent and perform. He writes here as your trusted advisor, strategist and friend.

Tectonic HR™ | Human-Centric Futures | Predict. Prevent. Perform.


Published via Towards AI


Note: Article content contains the views of the contributing authors and not Towards AI.