The quick evolution of AI technology brings with it a mixture of efficiency and unease, giving us a future filled with potential but also significant challenges. At the heart of this technological evolution lies a pressing question: How do we use the power of AI for good while mitigating the risks that come with it?
Miriam F. Weismann, clinical professor of accounting at FIU Business, delves into the nuanced implications of AI’s rapid development and the range of regulatory measures in place to protect our values and way of life. Through a lens of compassion, ethics, and global cooperation, she examines how we might balance the pursuit of innovation with our responsibility to ensure a safe, equitable, and ethical future shaped by the transformative power of AI.
Key Takeaways
- Ethics and Risk Management
  - Ethical guardrails are missing.
  - The ethics of AI remains an underdeveloped field within applied ethics.
- Theft of Intellectual Property / Copyright Infringement
  - What is “fair” use?
  - AI is infringing copyrights.
  - AI does not require consent or provide compensation for the information it acquires.
  - LLMs are not continuously stable and change over time.
  - Using ChatGPT for medical knowledge can lead to mistaken conclusions that nonetheless sound authoritative and convincing.
- AI Is a Known Legal Risk
  - Known risks typically give rise to legal liability.
  - Safety and security are known AI risks.
  - Lack of accountability raises concerns about the possible safety consequences of using unverified or unvalidated AI in clinical settings.
  - U.S. legislation on AI takes a sectoral approach to risk assessment.
To learn more about the risks, regulations, and implications of AI, watch the full video here.