Page 11 - AI Vol 2: Risks of AI

results, whereas some tasks are simply too complex for the current AI system to provide effective assistance. Continually monitoring and evaluating AI performance ensures that the agency can adapt its use of AI systems (or evaluate whether other systems are more appropriate). In addition to the general liability risks discussed in this document, an agency that deploys but fails to appropriately monitor an AI system could be found liable under a negligence theory. This liability may arise if the AI model generates harmful or inappropriate responses, whether directed towards a member of the public, a student, or an employee relying on the AI system for decision-making. Such liability could be attributed to the agency's failure to oversee the deployment adequately, especially if the potential risks were foreseeable with proper monitoring.

Related issues include overreliance and the subsequent degradation of human skills. Overreliance refers to situations where AI systems are excessively depended upon for decision-making, potentially leading to diminished human engagement and a failure to critically assess AI outputs. This reliance can result in a lack of questioning of AI-generated results, allowing unchallenged errors or biases to influence decisions. Moreover, an important concern that arises with the overuse of AI is the risk of skill degradation among staff. When employees rely heavily on AI tools, they may experience a decline in critical skills, particularly in areas requiring complex analysis and decision-making. This degradation not only diminishes individual capabilities but also impacts the overall resilience and adaptability of the agency. These issues are closely related, as overreliance can lead to skill degradation, and skill degradation can lead to overreliance. In addition to this situation leading to poor decision-making and performance by the public agency, it may also lead to legal liability to the extent that the agency implements flawed, biased, or legally non-compliant decisions based on AI advice. It is thus critical for agencies to implement protocols to guard against these risks.

AI system stability and long-term sustainability are also important to consider, particularly in the context of overreliance and skill degradation. While still in the infancy of widespread AI development and deployment, a large number of companies are attempting to build and develop businesses and AI products, some of which may not remain viable over time. Over the Thanksgiving break in 2023, OpenAI saw its CEO fired and the vast majority of its staff threaten to quit in protest, creating concern among the businesses built upon or simply relying on OpenAI's services. While that situation was ultimately resolved without interruption of OpenAI's services, the episode illustrates that agencies need to think critically regarding the extent to which they intend to rely on any one AI service, and to have backup plans in the event a service is no longer available. Boardroom conflicts are just one of the potential events that could lead to the discontinuation of AI services. AI models are trained on vast amounts of information primarily





RISKS OF AI  |  LOZANOSMITH.COM  |  VOLUME 2  |  11