Sunday, July 6, 2025

The AI Challenge for PI and DM professionals

If you are a professional in personal injury and disability management (PI/DM), return-to-work (RTW) coordination or related fields, Artificial Intelligence (AI) brings new challenges. The advantages of using AI are clear: new insights, expedited workflow management, and improved communication leading to better outcomes for clients.



Optimizing client outcomes is fundamental to the profession. Failing to use AI tools, or failing to use them appropriately, runs counter to the profession’s objectives.


AI Literacy and Use in PI/DM


In speaking on AI and PI/DM for the last two years, I have encountered apprehension among professionals in this field. At a recent Canadian Society of Professionals in Disability Management (CSDPM) event, I asked the audience to respond to four questions. The live “vote” (n=254) results are revealing:

  • Do you use the internet most days? 98% yes
  • Do you consciously interact with artificial intelligence (AI) most days? 48% yes
  • Does your organization have a well understood policy or guideline for the use of AI? 38% yes
  • Are you using AI in your professional DM work? 30% yes

While daily internet use is nearly universal, less than half report consciously interacting with AI, and less than a third are using (or admitting to using) AI in their professional work.

A lack of AI literacy may account for this low level of engagement.

AI Literacy, Transparency, and Use


AI literacy refers to the ability to critically understand, responsibly use, and effectively evaluate artificial intelligence tools and systems. This is not just technical knowledge; AI literacy requires ethical awareness, contextual judgment, and transparent communication about AI’s role in shaping information, decision-making, and practice.


Transparency is important. From their formal training, most PI/DM professionals are aware of academic integrity rules, particularly around plagiarism, but AI use requires a distinct set of rules.


As an instructor, I allow the use of AI tools in my courses and require an AI Attestation acknowledging the use and extent (if any) of AI tools in formal assignment submissions. I do not encounter many examples of plagiarism, but I am seeing what I call “plagi-AI-ism”: the use of AI tools to generate or significantly shape content without proper acknowledgment of the AI’s role. This lack of transparency runs counter to academic integrity and professionalism.


To be clear, the appropriate and transparent use of AI tools — such as language refinement, citation help, or idea generation — is encouraged and allowed if properly disclosed. Plagi-AI-ism refers only to non-transparent, misleading use of AI that obscures authorship, intellectual effort or critical professional considerations.


A recent KPMG/University of Melbourne study (Gillespie, N., Lockey, S., Ward, T., Macdade, A., & Hassed, G. (2025). Trust, attitudes, and use of artificial intelligence: A global study 2025. The University of Melbourne and KPMG. DOI 10.26188/28822919) found Australia, Canada, and New Zealand have among the lowest levels of training, literacy, and trust in artificial intelligence systems in the world.


AI education and training is reaching about 24% of the population in these countries, lagging slightly behind the US (28%) and well behind China and India (64%), Singapore and Switzerland (45%), and Denmark and Italy (34%) as well as dozens of other countries.

AI adoption in the workplace


The use of AI at work is growing. The KPMG/UofM study notes:

“Three in five (58%) employees intentionally use AI at work on a regular basis, with a third using it weekly. Generative AI tools are most used with many employees opting for free, publicly available tools rather than employer-provided options. Emerging economies are leading in employee adoption with 72% using AI regularly compared to 49% in advanced economies.”


AI use by knowledge workers (including professionals in PI/DM, RTW, and HR) may well be significantly higher. A study by Microsoft and LinkedIn (Microsoft and LinkedIn, 2024 Work Trend Index Annual Report, May 8, 2024, https://news.microsoft.com/2024/05/08/microsoft-and-linkedin-release-the-2024-work-trend-index-on-the-state-of-ai-at-work/) found:

  • 75% of global knowledge workers use AI at work—and won’t wait for companies to catch up.
  • 78% of AI users are bringing their own AI tools to work (BYOAI)—it’s even more common at small and medium-sized companies (80%).
  • 52% of people who use AI at work are reluctant to admit to using it for their most important tasks.

AI is everywhere.


Part of the lower-than-expected results from my recent survey may be due to a lack of awareness about the prevalence of AI technologies in the tools used by professionals in DM and related fields. To be clear, AI is everywhere; consciously or not, PI/DM professionals are interacting with it. Ignoring it or banning its use on the job is not an option.


If you use the internet most days—as two-thirds of humanity do—you are interacting with AI. AI is built into most office software.  Search engine results are mediated by AI.  Every time you complete a CAPTCHA, you are advancing AI.  When you hear the call centre message, “Your call may be monitored for training purposes,” there is a good chance your interaction is helping train AI.


AI is pervasive and evolving.  PI/DM professionals must keep pace. AI literacy is central to this task.

Setting rules around AI


Universities have rules about the use of AI, and academic integrity is an imperative; yet the KPMG/UofM study found:


  • Most students have used AI inappropriately, contravening rules and guidelines and over-relying on AI.
  • Two-thirds have not been transparent in their AI use, presenting AI-generated content as their own and hiding their use of AI tools.
  • Only half regularly engage critically with AI tools and their output.


While 38% of respondents in my survey said their organization had a well understood policy or guideline for the use of AI, the well understood policy was often a blanket “Don’t use AI.” For these professionals, many AI chatbots are off limits, and certain functions on corporate workstations have administrator restrictions.


Despite controls and policies like this, the KPMG/UofM study found “a third (35%) of employees report that the use of AI tools has resulted in increased compliance and privacy risks, such as contravening rules, policies and local laws.”


AI-enabled apps such as Microsoft 365 with Copilot are typically approved for PI/DM professionals in corporate settings. When probed about their actual use, respondents often reluctantly confirm the Microsoft/LinkedIn survey result: DM professionals are using personal devices to augment approved use.


This level of use is similar in other professional settings. A Queensland University of Technology study (McDonald, P., Hay, S., Cathcart, A., & Feldman, A. (2024). Apostles, Agnostics and Atheists: Engagement with Generative AI by Australian University Staff. Brisbane: QUT Centre for Decent Work and Industry. https://eprints.qut.edu.au/252079) found 71% of university staff respondents used generative AI for their university work:

  • Academic staff (75%)
  • Professional staff (69%)
  • Sessional staff (62%)
  • Senior staff (81%)

The Growth of In-house AI Solutions


Many insurers, agencies, and rehabilitation providers are now developing their own AI solutions. The rationale for such investments includes the primacy of client confidentiality, competitive advantage, improvements in efficiency, and better outcomes for clients.


In conversations with three senior executives working on their own AI solutions, the obstacles became clear. The time, training, and human resource allocation efforts are immense, and the payoffs are often incremental, at least initially, compared to the cost and effort.


Liberty Mutual reports productivity gains of 1.5 hours per week for the 25% of employees now using their LibertyGPT system. Manulife notes the gains in fraud detection, underwriting, and claims support. Several firms are using AI to train new agents and call centre representatives. A few are now rolling out internally developed agentic AI systems to work with and, in some cases, replace humans doing more routine tasks.

Implications


Governance, board, and C-suite leadership face a significant issue. Most organizations do not have a formal AI strategy. You need one. And do not expect it to be easy to develop, cheap to implement, or a finite undertaking. This is not a “one and done” agenda item.


Administrators have a tough task ahead of them. Chances are many of your professionals are already using AI, skirting any formal policy you may have and pushing the boundaries where policy or guidelines are lacking. You have regulatory imperatives and may be facing a talent crush amid other priorities including budgetary constraints.


PI/DM professionals have enduring professional priorities, including:

  • The health, well-being, and best interests of the client
  • Maintaining ethical standards and accountability
  • Continuing mastery of DM’s evolving knowledge base
  • Adapting proactively to changes in the environment and the evolving technology landscape


AI technologies are at the nexus of these professional responsibilities. Increasing AI literacy is fundamental to meeting them.

Improving AI Literacy levels


The KPMG/UofM study highlights this fundamental relationship:

“AI literacy is lagging AI adoption yet is critical for responsible and effective use…AI literacy is associated with greater use, trust, acceptance, and critical engagement, and more realized benefits from AI use including more performance benefits in the workplace.”


In PI/DM, performance benefits in the workplace translate directly into better service delivery and outcomes for clients.


That raises the question: how does a professional in personal injury and disability management improve AI literacy?


The QUT study asked AI-user respondents what resources they used to learn about AI tools, or AI more generally. The results were revealing. Informal sources, including peers, colleagues, family, or friends, topped the list (61.3%); Google searches were the source for nearly half of respondents (46.6%), and YouTube videos were a source for a third of respondents (32.1%). Formal sources for gaining AI literacy ranked much lower: conferences or seminars (16.9%), workshops (13.2%), and blog posts like this one (12.5%). Only eight of the 2,315 responses (0.4%) noted their degree or qualification curriculum as a resource.


If those results do not scare you, consider the misuse potential inherent in untrained use of AI tools. AI tools are not without limitations and risks. Professionals in all fields are responsible for selecting the right tool for the job, understanding the inherent risks associated with that tool, and mastering the tool before employing it in their work. Without formal training and the integration of AI literacy into professional development and qualifications, there is a real risk of harm to clients.

Final thoughts for professionals


AI is here and not going away. PI/DM professionals must evolve with AI technology, and improving AI literacy is part of that.


Learn about AI, but not just from YouTube videos. Engage in courses and sessions that provide formal, authoritative instruction. Align your use of AI with the values of the profession. Anticipate how others are using AI in the products or instructions you receive, and be conscious of how others will use AI with the work you produce and the services you provide. Think critically: AI does not replace your professional judgement. You must be aware of the limitations and weaknesses of the tools you use. As with any changing field, you need to stay informed. Where possible, be part of the processes setting standards and guidance for the use of AI in your profession.


To paraphrase a recent IBM study, AI won’t replace PI/DM professionals, but PI/DM professionals who use AI will replace those who don’t. [IBM Institute for Business Value | Research Insights, 2023, “Augmented work for an automated, AI-driven world”]

