3 Need-to-Know Insights about AI
At Berkshire Hathaway’s recent annual meeting, renowned investor Warren Buffett remarked on AI’s potential for abuse, stating, “[I]f I was interested in investing in scamming, it’s going to be the growth industry of all time.” Buffett’s comment underscores the risks of deploying AI that hasn’t undergone thorough safety vetting. For advisors who want to enhance productivity and stay competitive, it’s crucial to grasp the changing AI landscape. Here are three insights to help.
1. There is history behind the hype
Generative AI applications are poised to revolutionize financial services, offering enhancements across various sectors, from personalized customer service through chatbots to improved order handling with the introduction of Dynamic MELO, the first AI-powered stock exchange order type (see “Navigating Generative AI’s Big Bang”).
While these innovations may seem groundbreaking, it’s essential to recognize that traditional AI applications, such as those powering recommendation engines on platforms like Amazon, have long been integrated into financial operations, including banking, where they detect fraudulent activities like money laundering.
Likewise, many of the risk management, compliance, and governance controls used for other technology can be applied to AI. To ensure effective integration of generative AI into financial services, firms must prioritize several key principles:
1. Understanding Priorities: Identify which generative AI applications are essential for the business.
2. Vendor Due Diligence: Implement governance processes to vet and manage technology usage, including third-party vendors.
3. Compliance & Governance Programs: Maintain a focus on safeguarding firm and client assets by monitoring usage disclosures and restrictions, adjusting cybersecurity controls, enhancing data governance, and ensuring adherence to privacy regulations.
2. AI issues are fiduciary issues
Advisors have an obligation to act in the best interests of their clients, which extends to the use of technology, including AI. Whether it’s ensuring accountability for AI-driven errors in order handling or avoiding AI-washing in marketing materials, firms must uphold fiduciary standards. Last year, the SEC proposed a new rule to regulate predictive data analytics, encompassing AI, highlighting the importance of aligning firm practices with evolving standards. While the proposed rule has not been finalized, the SEC has set the tone with its speeches, press releases, and enforcement cases.
Important Resources for Advisors
1. “Isaac Newton to AI” Remarks before the National Press Club
2. SEC Charges Two Investment Advisers with Making False and Misleading Statements About Their Use of Artificial Intelligence
3. Remarks at Program on Corporate Compliance and Enforcement Spring Conference 2024
Implementing AI Responsibly
In short, that guidance covers familiar fiduciary standards that are also relevant to AI adoption. For instance, advisors must prioritize honesty, transparency, and prudent management of assets, including data. Additionally, when implementing AI, advisors should manage risks by committing to implementation principles such as the following:
1. Monitor AI decisions
2. Ensure security
3. Eliminate bias
4. Align with client interests
For specific examples of practices and mistakes to avoid, look to enforcement cases and publications from industry bodies seeking to facilitate wider responsible AI adoption. For example, in addition to the SEC AI-washing enforcement cases, advisors should review the DOJ case against Meta (“Justice Department and Meta Platforms Inc. Reach Key Agreement as They Implement Groundbreaking Resolution to Address Discriminatory Delivery of Housing Advertisements”) to understand what mistakes to avoid when using AI to market services. Advisors may also want to follow the Responsible AI Institute (Responsible.ai) and the AI Safety Institute at NIST, because understanding evolving AI standards is not just about compliance with requirements as they evolve; it is also an opportunity for productivity gains in an increasingly competitive landscape (see “The Economic Potential of Generative AI: The Next Productivity Frontier”; “Biden-Harris Administration Announces First-Ever Consortium Dedicated to AI Safety”; and “How to Navigate Global Trends in Artificial Intelligence Regulation”).
During AI vendor due diligence, it’s crucial to assess a potential vendor’s alignment with fiduciary duties and evolving AI standards. Advisors should understand whether a vendor has committed to certain standards and whether it has a strategy to keep pace. For example, advisors can ask:
1. Do you follow responsible AI standards, such as the National Institute of Standards and Technology’s (NIST’s) AI Risk Management Framework (AI RMF 1.0)?
2. How do you address risks such as faulty training data and discriminatory practices?
3. Have you incorporated controls for cyber theft, privacy, and consumer rights, addressing FTC concerns?
4. Have you addressed concerns about IP rights? (Department of Commerce Announces New Actions to Implement President Biden’s Executive Order on AI)
5. Which organizations do you monitor for trends and best practices?
3. The AI risk framework will continue to evolve: create a forum
With increasing regulatory and legislative efforts, including the EU AI Act and SEC, FTC, and state legislative proposals, compliance officers and product leads must create forums for discussion and approval before implementing new technologies (see “Artificial Intelligence Act: MEPs adopt landmark law”; “FTC Authorizes Compulsory Process for AI-related Products and Services”; “How to Navigate Global Trends in Artificial Intelligence Regulation”; and “Checking in on proposed California privacy and AI legislation”).
These legislative and regulatory efforts should address ethical concerns, transparency, and accountability, promoting risk management. For instance, certain states have proposed regulations targeting “automated decision-making” (ADM) systems that would require notice, opt-out, and redress for harm from mistakes or bias. Creating a new technology or AI forum will allow timely discussion of the trade-offs of using AI. Many of the proposed regulations and laws are tethered to a risk management approach in which firms are expected to assess and address AI risks by:
1. Requiring human monitoring
2. Mapping risks
3. Disclosing when AI is being used
4. Identifying unacceptable AI risks
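To make the four risk-management elements above concrete, here is a minimal sketch of the kind of internal record an AI or technology forum might keep for each proposed use case. This is purely illustrative: the class, field names, and example data are hypothetical, not drawn from any regulation or framework.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCaseAssessment:
    """Hypothetical record a firm's AI forum might review before approval."""
    name: str
    human_monitor: str                # who reviews the AI's decisions (human monitoring)
    risks: dict = field(default_factory=dict)    # risk mapping: risk -> mitigation
    client_disclosure: bool = False              # is AI use disclosed to clients?
    unacceptable_risks: list = field(default_factory=list)  # open risks that block approval

    def approved(self) -> bool:
        # A use case passes only if someone monitors it, its use is
        # disclosed, and no unacceptable risks remain unresolved.
        return (bool(self.human_monitor)
                and self.client_disclosure
                and not self.unacceptable_risks)

# Example: a chatbot use case with one open unacceptable risk.
chatbot = AIUseCaseAssessment(
    name="client-service chatbot",
    human_monitor="compliance analyst",
    risks={"hallucinated advice": "human review before sending"},
    client_disclosure=True,
    unacceptable_risks=["shares client PII with third-party model"],
)
print(chatbot.approved())  # False until the PII risk is resolved
```

The point of a structure like this is not the code itself but the discipline it encodes: approval is withheld until monitoring, disclosure, and unacceptable-risk questions have explicit answers.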
Stay Informed
These insights into AI trends highlight the importance of balancing innovation against material risks. As AI continues to evolve, advisors must stay informed and adapt to AI product trends.