
expect they will increasingly be under scrutiny by individuals and groups leveraging AI for their own purposes.

Another likely risk category of AI proliferation is the increasing danger of cyber threats from bad actors. Current LLM systems are very capable of computer programming and could be utilized to develop malicious code used to target public agency data infrastructure. This means that even relatively unsophisticated bad actors would be able to deploy viruses, malware, ransomware, and other vectors of attack with relative ease. More importantly, the same machine learning principles that make LLMs and AI so effective could be leveraged to increase the effectiveness of attacks. Imagine a virus that is able to learn from how its targets neutralized prior attacks and autonomously adjust its code and methods to defeat that neutralization strategy in the future. Such a virus could also initially spend time on the victim’s system probing for vulnerabilities and then deploy an attack strategy specifically tailored to the target’s unique weaknesses. Just as AlphaGo was able to learn from its experience playing the game of Go and to devise gameplay strategies no human had yet thought of (discussed in Volume 1), so could a malicious AI threat vector deploy surprising attack methods.

Importantly, whereas corporations, public agencies, and the general public are largely engaging in a deliberative and thoughtful process around AI deployment, it is very likely that bad actors looking to leverage these new technologies are already actively developing new attack vectors relying on advances made in machine learning and AI generally. Accordingly, this may be an area of concern that comes to the forefront relatively rapidly. It is also important to note that this issue does not just affect public agencies and their systems directly; it will also affect the vendors agencies rely on for various services. It is essential not only for public agencies to assess their own systems for potential vulnerabilities, but also to ensure the vendors they work with are aware of and preparing for these future threats, so that agency operations are not compromised by vendor downtime and agency data is not exposed through a breach of a vendor’s systems.

Many of these challenges are not new. Agencies are familiar with unscrupulous vendors using one-sided and sometimes deceptive contract terms to gain advantage. Agencies are certainly familiar with CPRA requests and how they can sometimes be “weaponized.” Cyber threats, particularly ransomware attacks in recent years, have been at the forefront of public agency data infrastructure challenges. What will be new is the speed and complexity of these issues moving into the future. Bad or even just troublesome actors will be able to utilize AI systems to supplement their own abilities, hypercharging the challenges agencies already face. Accordingly, even if an agency decides not to implement AI in its own operations, it will still have to adapt to a new AI-centered reality.

AI IS A TOOL. THE CHOICE ABOUT HOW IT GETS DEPLOYED IS OURS.
-OREN ETZIONI





