State, federal, EU and UK authorities have only begun to tackle the legal implications of Artificial Intelligence (AI). After dire warnings from AI experts earlier this month, it has become clear that a global response, led by the US, is needed.
This week the U.S. Senate Majority Leader announced plans to convene a series of Congressional "AI Insight Forums," starting in the fall of 2023. The Forums will task top AI experts with briefing Congress on the impact of AI in areas including the workforce, national security, privacy, explainability, and "doomsday scenarios." These are likely to be appointment television for many interested in this issue.
Sen. Chuck Schumer, the senior Senator from New York, said the forums are meant to shake Congress free from its slow-moving committee process in order to regulate this fast-moving technology.
"There's such little legislative history on [AI], so a new process is called for," Schumer said, because taking "the typical path - holding congressional hearings with opening statements and each member asking questions five minutes at a time, often on different issues," is unworkable for the AI challenge. There seemed to be agreement that lawmakers will need top-tier technical education if they are to move quickly in writing sensible guardrails for AI deployment.
Meanwhile, individuals and businesses in fields including legal, healthcare, education, professional services, coding, and virtually any data-driven industry are experimenting with the early AI tools offered to the public for free in their beta versions.
Broadly speaking, Artificial Intelligence (AI) includes a subset called Machine Learning (ML), which in turn contains a subset called Deep Learning (DL). At their most basic: AI is anything that allows computers to mimic human behavior; ML is a subset of AI in which computers learn without being directly programmed to do so; and DL, the newest of the defined subsets, is a type of ML that empowers multilayer computer neural networks to process high-level features.
AI was born from and feeds on enormous amounts of data, including words, text, images, and computer code. Everything entered into it is consumed and becomes an inextricable part of the AI neural network. Much the same way you cannot "unsee" or "unhear" an experience in life, AI cannot remove or erase data from its learning once it has been entered.
Currently, the policies and privacy notifications of the leading beta AI tools, including ChatGPT, expressly state that they cannot and do not make any representations of privacy. Even the commercial versions of these AI tools clearly warn against entering any private data. This is because they consume and learn from everything they receive, without exception, even the underlying code of the programs used to enter the data. Not only is there no "filter" on any AI input valve, but there are no "stop" or "reverse" controls either.
The legal implications of AI in all its learning forms are therefore serious and growing. Businesses, including all employers, are required to maintain the privacy and security of the data entrusted to them by employees, vendors, business partners, and customers. Yet use of ChatGPT or any other AI tool in the workplace, by definition, violates these obligations with respect to anything that is entered into them.
Accordingly, no one in the US workforce is advised to enter into any AI tool data that may include the images, voices, code, names, contact information, or other sensitive or identifying facts from or about anyone, including clients, employees, business partners, vendors, or customers.
Areas of greatest legal concern include:
*Trade secrets and intellectual property, including patents, copyrights, and trademarks
*Misinformation, rumor, unproven allegations, lies, falsehoods, and mistakes
*Safety, reliability, accuracy, precision, explainability, and interpretability
*Privacy rights, security rights, notice, disclosures, right to correct, right to be forgotten
*Data transfers, website and URL Terms of Use and Conditions, and data scraping
Lastly but just as critically, AI at its most basic function makes predictive assumptions that will include all the biases and discrimination that may exist in the data it consumes. For a business to rely on any type of AI learning to supplant human decision making in the employment arena would be fraught with inherently prejudicial outcomes. Therefore, use of AI in any predictive capacity is not yet proven safe. For example, the use of AI in business decisions such as hiring, HR matters, disciplinary actions, promotions, bonuses, demotions, and/or employment terminations is not advised.
Source: "Congressional AI Insight Forums starting this fall... would task top AI experts with briefing Congress on topics as varied as workforce, national security, privacy, explainability and even 'doomsday scenarios.'"
https://www.politico.com/news/2023/06/21/schumer-launches-new-phase-in-push-for-ai-bill-00102871