
users is low, there is still some risk of exposing sensitive information. Accordingly, public agencies should avoid including confidential information, particularly personally identifiable information, in LLM inputs.
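As a purely illustrative sketch of this precaution in practice, the short Python example below scrubs a few common identifier formats from text before it is included in an LLM prompt. The patterns and function names are assumptions for illustration, not a recommended product; a production deployment would rely on a dedicated PII-detection tool rather than simple patterns, which cannot catch names or other free-form identifiers.

    import re

    # Illustrative patterns for common identifier formats (not exhaustive;
    # real deployments should use a dedicated PII-detection tool).
    PII_PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    }

    def scrub_pii(text: str) -> str:
        """Replace recognizable identifiers with labeled placeholders
        before the text is included in an LLM prompt."""
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[{label.upper()} REDACTED]", text)
        return text

    prompt = scrub_pii("Contact Jane Doe at jane.doe@example.gov or 555-123-4567.")
    # Email and phone number are replaced with placeholders; note that names
    # like "Jane Doe" pass through, which is why pattern-based scrubbing alone
    # is not a complete safeguard.
    print(prompt)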



By default, most publicly available models use user interactions with LLMs for training, potentially incorporating user prompts without explicit consent. Some LLM providers offer the ability to "opt out" of prompts being used for training purposes, or offer enterprise-grade plans with stronger security guarantees, including that user data is never used for training.

Even if the agency or individual user has "opted out" of having their data used for training, the same privacy concerns that an agency would consider with any other software or cloud service still apply. Agencies should still evaluate LLM services based on their security practices, compliance with applicable data privacy laws, and ability to prevent and respond to data breaches.

Public agencies should ensure the AI systems they utilize provide sufficient data protection, such as refraining from training on user prompts, before inputting sensitive information. This precaution extends to software incorporating AI features, as user data may be directed to third-party AI models for processing and response. If the agency cannot be sure of the data privacy protections of an AI system, sensitive and confidential information should not be entered into the model, particularly personally identifiable information (names, addresses, identifiers, etc.). This is especially true for school districts handling student data.

EXAMPLE

Utilizing AI to draft individualized education program ("IEP") goals may result in a Family Educational Rights and Privacy Act ("FERPA") violation if a student's personally identifiable information is disclosed to an AI system.

LLMs that run exclusively on a user's or agency's own hardware generally do not raise these same privacy concerns. As discussed in Volume 1, a growing variety of LLMs can run entirely on an agency's hardware, or even on an individual user's personal computer. Because all of the computing and inference is performed on local hardware, no user prompts or data need to be transmitted to the developer's servers. The developer therefore has no opportunity to train on the user data, nor does it store user data that could be exposed in a data breach.
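As a concrete illustration of this local-deployment approach, the following Python sketch assumes the open-source llama-cpp-python package and a locally downloaded model file; the file path and prompt are illustrative assumptions. The point of the sketch is that inference runs entirely on the local machine, with no network connection required.

    # Minimal sketch of fully local LLM inference using the open-source
    # llama-cpp-python package (pip install llama-cpp-python).
    from llama_cpp import Llama

    # Load a locally downloaded model file from disk. No network connection
    # is needed, so prompts and outputs never leave the agency's hardware.
    llm = Llama(model_path="./models/local-model.gguf", n_ctx=2048, verbose=False)

    # Run inference locally; the prompt is illustrative.
    response = llm(
        "Summarize key steps for evaluating a software vendor's security practices.",
        max_tokens=256,
    )

    print(response["choices"][0]["text"])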





