Mitigate known, potential and unknown risks before committing to a change. Plan to cancel, remediate or roll back that change before, during or even after implementation to negate identified risks and prevent harm.
In plain English: when you innovate and plan to implement sweeping change, don't just focus on the benefits. Understand the full impact, including potential detrimental consequences, whether direct or indirect. Understand how those risks will be prevented or compensated for. If your change could introduce unacceptable risk or cause harm, a change leader must have the strength to stop, review and potentially rethink the strategy rather than simply ploughing ahead believing the problems can be patched later.
As an innovator and change leader, there is an inherent responsibility to safeguard against risk, follow ethical principles and work toward eliminating harm for the benefit of the many, not just the few.
In recent weeks I have watched 'leaders'* telling the human race to simply get on board or be left behind as sweeping, global change is coming. Not one of those people has presented a risk assessment, an impact analysis or a plan for mitigation. In any ITSM-governed organisation, a change advisory board (CAB) would refuse that implementation on the spot. The change would not proceed.
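The CAB gate described above can be sketched as a simple pre-implementation check. This is a minimal illustration, not a real ITSM tool: the `ChangeRequest` fields and the `cab_review` function are hypothetical names chosen for the example, standing in for the artefacts a change authority would demand before approval.

```python
from dataclasses import dataclass

@dataclass
class ChangeRequest:
    """Hypothetical change record carrying the governance artefacts a CAB expects."""
    title: str
    risk_assessment: str = ""   # identified risks and their severity
    impact_analysis: str = ""   # who and what the change affects, directly or indirectly
    mitigation_plan: str = ""   # how each risk is prevented or compensated for
    rollback_plan: str = ""     # how to cancel or remediate before, during or after rollout

def cab_review(change: ChangeRequest) -> tuple[bool, list[str]]:
    """Refuse any change that arrives without its governance paperwork."""
    missing = [name for name, value in [
        ("risk assessment", change.risk_assessment),
        ("impact analysis", change.impact_analysis),
        ("mitigation plan", change.mitigation_plan),
        ("rollback plan", change.rollback_plan),
    ] if not value.strip()]
    return (len(missing) == 0, missing)

# A sweeping change announced with benefits only, and nothing else:
approved, gaps = cab_review(ChangeRequest(title="Global AI rollout"))
print(approved)  # False
print(gaps)      # all four artefacts are missing
```

The point of the sketch is the refusal path: approval is the output of evidence presented, not of enthusiasm.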
As I have written before: between innovation and production deployment, there must be an application of compliance, governance and ethics. Pushing change through under the banner of innovation, or more accurately profit, is a short-sighted game that risks consequences far greater than anyone currently modelling the upside is prepared to acknowledge.
This is not an argument against progress. It is an argument for doing progress properly.

Rebellions, as a certain resistance member once observed, are built on hope. So here is mine: principled, credible voices are speaking clearly into this space, and they deserve your attention.
In January 2026, Dario Amodei, CEO and co-founder of Anthropic, published an essay titled "The Adolescence of Technology". I would encourage anyone reading this article to spend time with it. He argues that AI is in a volatile developmental stage where capabilities are advancing faster than the social, legal and ethical frameworks needed to govern them. He identifies five categories of risk that should concern every working professional:
AI systems developing misaligned goals
Misuse for mass destruction
Misuse by powerful actors to consolidate power
Economic disruption, including the mass displacement of white-collar workers
Unpredictable indirect societal shifts
His warning about economic disruption is particularly sobering. He cautions that AI could displace half of all entry-level white-collar jobs within one to five years, with wealth concentration potentially exceeding anything seen since the Gilded Age. For those less familiar with that particular chapter of history, the analogy is closer to home than you might think. Imagine the Capitol and the Districts from The Hunger Games, or Piltover and the Undercity from Arcane: extraordinary wealth and technological power concentrated at the top, whilst those who generate that value live with the consequences at the bottom. That is not dystopian fiction deployed as a warning for dramatic effect. That is history repeating, and Amodei is saying we are actively building the conditions for it again.
Most tellingly, he has publicly stated his deep discomfort with a small number of companies and individuals making decisions about the future of this technology on behalf of everyone else. That kind of honesty, from inside the industry, is rare and it matters.
You can read the essay at darioamodei.com
On the institutional side, the European Commission has produced the world's first legally binding AI governance framework in the form of the EU AI Act. It does precisely what responsible change management demands: it mandates risk assessment before deployment, requires mitigation of systemic risks and is being implemented in carefully planned phases rather than as a single overnight transformation. Rules on general purpose AI models became effective in August 2025. Full applicability for most operators follows in August 2026. For organisations found in breach of the most serious provisions, fines of up to €35 million or 7% of global annual turnover apply. That is not a polite suggestion. That is governance with consequence.
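The penalty structure above is worth making concrete. Under the Act's most serious tier, the ceiling is the higher of the fixed sum and the turnover percentage, so for large operators the 7% figure dominates. A minimal sketch of that calculation (the function name is mine, chosen for illustration):

```python
def ai_act_max_fine(global_annual_turnover_eur: float) -> float:
    """Penalty ceiling for the most serious EU AI Act breaches:
    the higher of EUR 35 million or 7% of global annual turnover."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# A company turning over EUR 2 billion faces a ceiling of EUR 140 million,
# because 7% of turnover exceeds the fixed EUR 35 million floor:
print(ai_act_max_fine(2_000_000_000))  # 140000000.0
```

For smaller operators the €35 million figure is the binding cap; either way, the exposure scales with the size of the organisation rather than capping out at a sum a large firm could shrug off.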
The framework exists. The precedent is being set. The question is whether those driving the fastest horses are paying any attention.
Full details are available at digital-strategy.ec.europa.eu
The EU has demonstrated that robust, ethical AI governance and innovation are not mutually exclusive. The UK government, however, has reportedly delayed meaningful AI legislation in part due to a desire to align with the United States. Given recent geopolitical events and the direction of travel from Washington, where legislation and safeguards are being stripped away, this feels like an increasingly dangerous foundation to build on. The argument for alignment with the EU framework, rather than with the deregulatory path from across the Atlantic, grows stronger by the week. Clear, robust and ethical AI leadership is needed urgently. The EU has shown it is possible. The UK government should take note, and those of us who live and work here should be making that case loudly.
If you share these concerns, your voice matters. Here is where to use it.
Be a discerning audience first. Question every bold claim. Ask who benefits, who bears the risk, and where the evidence is. If those answers are not forthcoming, treat the announcement accordingly. Your judgement is valid. Trust it.
Engage with policy. Many governments are actively consulting on AI regulation right now. Find your representative and tell them that innovation without governance is not progress, it is gambling with the wellbeing of the human race.
Support the organisations holding the line and hold those that don't to account. Companies and research bodies insisting on ethical frameworks, transparency and safety guardrails need public and professional endorsement. Visibility matters more than most people realise.
Use your professional influence. Whether you work in technology, HR, finance, education or healthcare, the ripple effects of unmanaged AI deployment will reach your sector. Speak up in your professional communities before the change lands on your desk uninvited.
Connect with like-minded professionals. Movements grow from networks. The quiet majority who share these concerns need to find each other and amplify a collective, reasoned voice.

*A note on 'leaders': Leadership in its truest sense carries weight: responsibility for those who follow, and accountability for the consequences. Some of the loudest voices currently shaping this conversation belong to individuals of extraordinary personal wealth, for whom the realistic downside of being wrong is, at worst, inconvenient. When someone insulated from consequence tells the rest of us to trust the direction of travel, it is worth pausing to consider whose journey is actually being planned, and who will be left to deal with the wreckage if the plan goes wrong. Seek wisdom from those who have something to lose alongside you. Use your judgement accordingly.
#AIGovernance #ResponsibleAI #AIPolicy #ChangeManagement #FutureOfWork