Page 10 - AI Vol 2: Risks of AI

offering thorough training to employees is essential to prevent accidental misuse and any liability associated with that use.

"AS AI BECOMES THE NEW INFRASTRUCTURE, FLOWING INVISIBLY THROUGH OUR DAILY LIVES LIKE THE WATER IN OUR FAUCETS, WE MUST UNDERSTAND ITS SHORT- AND LONG-TERM EFFECTS AND KNOW THAT IT IS SAFE FOR ALL TO USE."
- KATE CRAWFORD

It is also important for agencies to be informed about the variety of AI systems so that they can select the system best suited to the agency's needs. For example, open-source models deployed on an agency's own servers may provide certain data privacy benefits and be more appropriate when the agency expects these systems to work with sensitive information, though currently available open-source models perform at a lower level than the major commercial models. Policies vary by service, but most paid AI services allow users to opt out of having their prompts used for training purposes. Monitoring and controlling employee use of AI is possible when the employee is using employer-provided computer equipment or accounts; otherwise it would prove difficult, and an employer would need to rely on whatever employee policy is in place. If the model will be deployed in a public- or student-facing way, it is important to consider
            what protections are available to ensure the model does

            not produce inappropriate, offensive, or harmful outputs.
            Does the agency have mechanisms in place to alert them
            of any inappropriate use of the AI or to notify them if
            the model generates a prohibited output? Is the agency
            able to test the models for potential biases and are there
mechanisms to correct such biases? These are just some of the considerations an agency should weigh when evaluating AI deployments.
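As a rough illustration of the kind of output safeguard and alerting mechanism described above, the sketch below shows a minimal keyword-based filter placed between a model and its users. All names here (the blocklist, the alerting hook) are hypothetical assumptions for illustration, not a specific product or the only way to implement such protections.

```python
# Hypothetical sketch: a minimal output safeguard an agency might place
# between an LLM and end users. The blocklist and the alerting hook are
# illustrative placeholders, not a real moderation standard.

BLOCKED_TERMS = {"offensive_term_a", "offensive_term_b"}  # placeholder list


def alert_staff(hits: list[str]) -> None:
    # In practice this might email an administrator or write to an
    # audit log, per the agency's monitoring policy.
    print(f"ALERT: model output matched blocked terms: {hits}")


def review_output(text: str) -> tuple[bool, str]:
    """Return (allowed, text_or_notice); withhold outputs containing blocked terms."""
    lowered = text.lower()
    hits = [term for term in BLOCKED_TERMS if term in lowered]
    if hits:
        alert_staff(hits)
        return False, "This response was withheld pending staff review."
    return True, text
```

A real deployment would likely rely on a vendor's moderation service or a classifier rather than a static word list, but the structure (screen, withhold, notify) reflects the questions raised above.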


After models are deployed, it is critical that the agency continually evaluate the model's performance for accuracy and effectiveness. The agency may find that its LLM deployment excels at certain tasks, saving agency staff time and resources, while producing poor or inaccurate results in others. The agency
            may find that certain prompting strategies produce better






     10   |    VOLUME  2                                                                RISKS OF AI  |  LOZANOSMITH.COM