Saturday, October 19, 2024

Personal Injury & Disability Management Education in the AI Era

Artificial Intelligence (AI) is a reality. It is in our workplaces, it is integrated into the software we already use, and it is not going away.

Seventy-five percent of knowledge workers are already using AI at work (Microsoft/LinkedIn Work Trend Index, May 2024), and everyone with internet access (more than two-thirds of the world's population) has access to AI tools. The standard desktop applications we use every day (Google Workspace, Microsoft Office, Adobe Acrobat, Google Search) are already AI-enabled.

AI leverages scarce resources, empowers people with disabilities, and facilitates access to education and employment. This is the AI reality, and it is just the beginning.

Generative AI in Continuing PI&DM Education

As a continuing education instructor in the personal injury and disability management (PI&DM) space, I have detected the use of AI tools in online discussion posts and formal assignment submissions over the last two years. That is not surprising given the near-universal access to AI tools and the growing number of organizations with enterprise-level deployments of AI applications or platforms with built-in AI features. It only adds to the urgency of changing my instructional approach.

In my view, PI&DM educators must recognize this AI reality and prepare professionals to understand and use AI technology in both their work and studies.

That’s why I now allow AI tools to be used in the courses I teach.

AI in a PI&DM Course: An Example

In a recent online DM continuing education course, I permitted (but did not require) the use of AI tools. The one-week course is structured around two discussion exercises and formal assignments. In each exercise, each student selects a case scenario and provides an initial post of 250-400 words. Other participants engage through online discussion by offering posts of 150-250 words that probe issues, add alternatives, or link proposed actions to literature or experience. The interaction mimics a live classroom discussion in an asynchronous format.

A Discussion Grading Rubric awards greater marks for greater participation, with further allocations for demonstrated knowledge and critical thinking and for meeting timeliness and length requirements. The discussions are the students' opportunity to demonstrate their knowledge of course content and their ability to apply concepts covered in the course materials.

Prior to the start of the course, every student received an email explaining the permissible (but not required) uses of AI tools. The advisory was replicated in the Required Reading section of the online module. A required eight-minute video (summarized in the slides posted here) provided further guidance on acceptable and unacceptable uses of AI tools and introduced common issues with AI tools, including hallucinations.

An additional 22-minute video on AI, providing basic AI literacy information, was offered in the optional resources section of the module.

The email, required reading instruction, and required video contained the following instruction regarding the course assignment:

You must provide an "attestation" statement with your assignment submission declaring:
a. which AI applications were used in the preparation of your assignment submission and for what purposes, OR
b. that no AI applications were used in the preparation of this assignment submission.
Please view the required video for more information [link to 8-minute video].

Results from Discussion Exercises

Seventeen participants were registered for the course. On average, there were about five posts on each of three of the five discussion scenarios, and about eighty posts in total.

Only two discussion posts contained indicators of inappropriate citation and reference use. In both cases, unverifiable references (likely "AI hallucinations") were detected, resulting in a zero grade for the exercise.

Citations and references support the posted arguments and are intended to provide evidence of understanding and appropriate application of course content.

Participants were advised of the unverifiable references, and the related posts were deleted to maintain academic integrity and to avoid any inadvertent re-use of the posts or of arguments purportedly supported by them.
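The post does not describe how references were verified, and in my case the checks were manual. For instructors who want to triage citations before checking them by hand, here is a minimal sketch, assuming Python and the public Crossref REST API; the function name and the sample citation string are my own illustration, not part of the course process.

```python
import json
import urllib.parse
import urllib.request

CROSSREF_API = "https://api.crossref.org/works"

def find_candidate_records(citation_text: str, rows: int = 3) -> list[dict]:
    """Query Crossref for bibliographic records resembling a citation string.

    Returns candidate records (title, DOI, year) for manual review. An empty
    list flags the citation for follow-up; it is not proof of a hallucination,
    since books, reports, and grey literature may not be indexed by Crossref.
    """
    params = urllib.parse.urlencode({
        "query.bibliographic": citation_text,  # fuzzy bibliographic search
        "rows": rows,
    })
    with urllib.request.urlopen(f"{CROSSREF_API}?{params}", timeout=10) as resp:
        items = json.load(resp)["message"]["items"]
    return [
        {
            "title": (item.get("title") or ["(no title)"])[0],
            "doi": item.get("DOI"),
            "year": (item.get("issued", {}).get("date-parts") or [[None]])[0][0],
        }
        for item in items
    ]

if __name__ == "__main__":
    # Hypothetical citation string pulled from a discussion post.
    cited = "Smith, J. (2021). Early return-to-work outcomes in disability management."
    for record in find_candidate_records(cited):
        print(record)
```

A script like this can only surface candidates; a human reviewer still compares the returned titles and authors against the student's citation and decides whether the reference is genuine.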

Results from Assignment Submissions

Examining the assignment submissions that complied with the attestation requirement revealed that two-thirds of participants used the AI features built into their desktop software. One student noted using a survey tool's AI assistant in designing an assessment tool. Other students attested to using AI tools for research, alternative explanations of required readings, drafting outlines, and grammar checking.

With the plethora of AI tools in the marketplace, it is not surprising that several AI tools reported in the attestations were new to me. Participants are mostly employed professionals, and their available software often reflects employer-determined deployments rather than personal choice.

Overall, submissions attesting to appropriate use of AI tools were straightforward to assess against grading criteria established for the course.

Discussion

This experience suggests that a policy accepting certain AI uses can enhance assignment quality. Discussion submissions reflected good levels of knowledge demonstration, application of concepts, and timeliness, similar to past performance in this course.

Declared use of AI tools was also associated with other factors, such as high rates of access to the required and optional course resources. This may simply reflect the diligence of these particular participants rather than any effect of AI.

Interestingly, in more than a decade of teaching this and similar courses, this was the first time I received no requests for extensions to discussion or assignment submission deadlines. It is not clear to me why, or whether permitted access to AI for uses such as research and alternative explanations played a role in this departure from past patterns.

The detection of unverifiable references indicates a more fundamental failure to understand academic integrity protocols. Ironically, citing an AI hallucination reveals that the student did not access the cited work, something that might not have been detectable had the reference been verifiable. This suggests a need for further training in the proper use of references, as well as AI literacy around issues such as hallucinations.

Final Thoughts

This instructional experience reinforces the need for increased AI literacy in PI&DM education. That means moving beyond permissive policies to intentional course design that integrates AI issues into the curriculum.

The alternatives are not tenable. Banning all use of AI tools in continuing education courses is impractical: AI is intrinsically present in everyday apps and likely in participants' employer-provided software environments. Enforcing extensive restrictions is difficult and distracts from the assessment of mastery. "AI detectors" are available to instructors, but authentic, well-written original work fed into them will often score in the "likely AI-generated" range of the scale. (And, of course, such detectors may be defeated by asking an AI app to "humanize" work so that it scores lower on that scale.)

In a work world where AI tools are increasingly intrinsic, AI literacy and understanding of academic integrity must be priorities. Permitting use of AI tools is merely accepting reality. That’s easy. The hard part is incorporating AI technologies into the curriculum and improving AI literacy among PI&DM professionals… including instructors.