Below is an excerpt of a legal update authored by Robinson+Cole’s Trevor Bradley, Ian Clarke-Fisher, and Stephen Aronson.

On August 20, 2024, in Ryan LLC v. Federal Trade Commission, the U.S. District Court for the Northern District of Texas entered summary judgment in favor of the plaintiffs and set aside the Federal Trade Commission’s (FTC) Final Rule, prohibiting the FTC from enforcing the Final Rule and preventing it from taking effect nationwide. As succinctly stated by the district court in its decision, “The Court sets aside the Non-Compete Rule. Consequently, the Rule shall not be enforced or otherwise take effect on its effective date of September 4, 2024, or thereafter.” Decision at 2.

Background

As we reported in January 2024, April 2024, and July 2024, the FTC issued the Final Rule on April 23, 2024, banning nearly all worker non-compete agreements nationwide effective September 4, 2024. 

On July 3, 2024, the Ryan court enjoined the FTC from implementing or enforcing the Final Rule, but only as to the plaintiffs in that case. Importantly, the July 3rd ruling did not restrain the FTC from enforcing the Final Rule against other employers. At that time, the court stated that it intended to issue a decision on the ultimate merits of the case by August 30, 2024, a promise it made good on this week.

To read the legal update, click here.

This post was co-authored by Labor + Employment Group lawyer Madison C. Picard.

Artificial Intelligence (AI) can greatly benefit manufacturers in the workplace. That said, it should be handled with care. States across the country are attempting to regulate the use of AI in various contexts, from political campaigns to social media, and the workplace appears to be next. Colorado recently enacted the first comprehensive AI legislation regarding the development and deployment of AI, the “Colorado AI Act” (the Act), and its impact will reach various workplaces nationwide. Here is what manufacturers should know about the Act:

  • The Act, which takes effect on February 1, 2026, and will become part of Colorado’s Consumer Protection Act, regulates “developers” and “deployers” of AI. Manufacturers will most likely fall under the latter category.
  • The Act requires deployers to use “reasonable care” to avoid algorithmic discrimination arising from the use of “high-risk” AI systems. It defines “high-risk” AI systems as any system that “makes, or is a substantial factor in making, a consequential decision,” meaning “a decision that has a material legal or similarly significant effect,” including decisions related to employment. The Act defines a “substantial factor” as one that “assists in making a consequential decision,” “is capable of altering the outcome of a consequential decision,” and is “generated” by an AI system. Thus, AI systems used in human resources practices, such as screening candidates in the recruiting/hiring process, may be classified as “high-risk.”
  • Manufacturers may be wondering how they can ensure that they use “reasonable care” when using these kinds of high-risk systems. Interestingly, the Act includes a rebuttable presumption that deployers are using reasonable care if they:
    • Implement a risk-management policy that is “reasonable,” as defined by the Act based on various considerations, and is regularly and systematically reviewed and updated.
    • Complete annual impact assessments for high-risk AI systems. The Act outlines various elements that the risk assessment must include.
    • Provide several notices required by the Act, including a notice to consumers that AI is being used “to make, or be a substantial factor in making, a consequential decision,” which must be provided to the consumer before that consequential decision has been made. For consumers who are adversely affected by an AI system (for example, denied an employment opportunity), notice must be provided that they have “an opportunity to correct any incorrect personal data” used by the AI system and “an opportunity to appeal an adverse consequential decision.” Manufacturers who employ fewer than 50 employees, do not use their own data to train AI systems, deploy the AI systems for their intended purpose, and make impact assessments available to consumers may be exempt from some of these notice requirements.  
    • Disclose to the Attorney General, within 90 days of discovery, any algorithmic discrimination that the AI system has caused.
  • There currently is no private right of action under the Act, meaning the attorney general’s office has exclusive enforcement authority. The Act does outline some affirmative defenses that manufacturers may use if facing an enforcement action.

Manufacturers in any state that deploy AI as part of human resources practices, including candidate screening, talent management, performance management, and other purposes, should consider developing or updating AI policies and frameworks to ensure compliance with this law and other applicable laws, such as the New York City Automated Employment Decision Tools (AEDT) law. Other states are following suit, and the federal government is also focusing on this issue; we expect the number of states passing such laws to continue rising. In fact, the U.S. Department of Labor recently released guidance on employers’ use of AI in the workplace, which contains suggested AI principles and foreshadows future federal legislation. Thus, if there ever was a time to prioritize AI governance, it is now.

This post was co-authored by Labor + Employment Group lawyer Jessica Pinto.

With the 2024 election fast approaching, and political news exploding, manufacturers are asking an important question: What is the role of political bobbleheads, pins, stickers, and discussions in the workplace? 

While public employers (i.e., government employers) are generally restricted from infringing upon employees’ free speech rights under the First Amendment of the U.S. Constitution, those same protections do not apply to employees working for a private employer. That being said, there may be protections under state and federal law. 

Under several state laws, employees’ political affiliation or related activity is protected. For example, in California, no employer may make, adopt, or enforce any rule, regulation, or policy that forbids or prevents employees from engaging or participating in politics or becoming candidates for public office, or that controls or directs the political activities or affiliations of employees. The law further provides that employers may not coerce or influence employees to adopt, follow, or refrain from adopting or following a particular political action or activity. Similarly, in Colorado, it is unlawful to prevent employees from forming, joining, or belonging to any lawful political party or to coerce employees because of their connection to a lawful political party. New York also mandates that an employer cannot take certain adverse actions or discriminate against employees for their political activities outside of working hours, off the employer’s premises, and without using the employer’s equipment or property, provided the activities are legal. Washington, D.C., likewise prohibits discrimination based on political affiliation by statute, and other states have passed similar laws in addition to those mentioned. While these protections relate to political affiliation and activity rather than speech at the workplace, they reveal some states’ desire to shield employees from employer action in the realm of politics.

State laws may also tackle political speech in the workplace. Connecticut, for example, has generally extended free speech rights to employees of private employers, subject to a few exceptions. Namely, Connecticut law establishes that a private employer cannot discipline or threaten to discipline an employee for exercising free speech rights guaranteed under the federal or state constitutions unless the speech substantially or materially interferes with the employee’s job performance or the working relationship between the employee and the employer.

Political speech can also be protected under other federal laws in certain circumstances. For instance, the National Labor Relations Act (NLRA) applies to union and nonunion workplaces alike and protects employees’ rights to engage in certain protected activities. Namely, Section 7 of the NLRA protects employees’ concerted activity for the purposes of mutual aid or protection, which is construed broadly and must relate to wages, hours, or other working conditions. Therefore, the NLRA could apply to concerted activity involving speech with a political message or a connection to political expression (e.g., fair wages, minimum wage increases, etc.). A recent example is a National Labor Relations Board (NLRB) decision holding that an employer violated the NLRA when it terminated an employee who had joined with others in refusing to remove letters stating “BLM,” which stood for Black Lives Matter, from their work apron. In that decision, the NLRB indicated the marking was a “logical outgrowth” of prior concerted protests concerning racial discrimination in the workplace and an attempt to bring complaints to the employer’s attention.

Finally, employers (e.g., owners, leadership, etc.) must be careful if they engage in political speech, including endorsing certain political candidates or views. Doing so can create challenges in the workplace, including animosity and isolation for those who may disagree, and can result in harassment, discrimination, and other issues. In addition, mandating that employees attend meetings and listen to such speech may be prohibited under some state and even federal laws, depending on the circumstances.

With election season around the corner, political discussions and politics in the workplace are bound to increase. In anticipation, manufacturers should consider:

  • Whether and how to implement a policy addressing speech in the workplace.
  • Reminding employees about cooperation and respect in the workplace.
  • Training managers on how to navigate such conversations or expressions should they arise.
  • Ensuring that the workplace remains an inclusive environment where employees can work cooperatively and efficiently.
  • Consulting with competent legal counsel if issues arise.

This post was authored by Artificial Intelligence Team member Sean Griffin and is also being shared on our Data Privacy + Cybersecurity Insider blog. If you’re interested in getting updates on developments affecting data privacy and security, we invite you to subscribe to the blog.

Artificial Intelligence (AI) can offer manufacturers and other companies much-needed assistance during the current workforce shortage. It can help workers answer questions from customers and other workers, fill skill gaps, and even help get your new employees up to speed faster. However, using AI comes with challenges and risks that companies must recognize and address.

For example, AI can produce a compelling and utterly wrong statement – a phenomenon called “hallucination.” If your car’s GPS has ever led you to the wrong location, you have experienced this. Sometimes, this happens because the AI was given bad information, but even AI supplied with good information can hallucinate, to your company’s detriment. And your employees cannot produce good work with bad information any more than an apple tree can produce pears.

Also, many real-world situations can confuse AI. AI can only recognize a pattern it has seen before, and if it encounters something it has not seen before, it can react unpredictably. For example, putting a sticker on a stop sign can flummox an AI system, and it can confidently misidentify images. Misidentifying images in real-world situations can cause problems if organizations employ facial or image recognition technology.

These problems can be managed, however. Through AI governance, companies can mitigate these issues to use AI safely, productively, and effectively. 

To start, AI can only supplement human thought, not replace it, so appropriate AI usage requires humans to monitor what the AI is doing. Your company should no more have AI running without human monitoring than you would follow your GPS’s instructions into a lake. Without appropriate monitoring, your AI can easily start hallucinating and promulgating incorrect information across your organization, or it can perpetuate biases that your company is legally obligated to avoid.
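
For readers who want a concrete picture of what human monitoring can look like in practice, below is a minimal, illustrative sketch in Python. Everything in it is hypothetical: the ai_answer function, the confidence score, and the review threshold stand in for whatever AI service and escalation process your company actually uses. The point is simply that low-confidence output gets routed to a person instead of going straight out the door.

```python
from dataclasses import dataclass

# Hypothetical wrapper around whatever AI service your company uses.
@dataclass
class AIAnswer:
    text: str
    confidence: float  # 0.0 to 1.0, as reported by the model or estimated separately

def ai_answer(question: str) -> AIAnswer:
    # Placeholder: a real system would call your AI vendor's API here.
    return AIAnswer(text="Our standard lead time is six weeks.", confidence=0.62)

REVIEW_THRESHOLD = 0.80  # policy-defined cutoff; anything below goes to a human

def handle_customer_question(question: str, review_queue: list) -> str | None:
    answer = ai_answer(question)
    if answer.confidence < REVIEW_THRESHOLD:
        # Low confidence: hold the response and route it to a human reviewer.
        review_queue.append((question, answer.text))
        return None
    # High confidence: send it, but log it so periodic audits can catch hallucinations.
    print(f"AUDIT LOG: {question!r} -> {answer.text!r}")
    return answer.text

queue: list = []
reply = handle_customer_question("What is the lead time on part #4431?", queue)
if reply is None:
    print("Held for human review:", queue[-1])
else:
    print("Sent to customer:", reply)
```

The specific mechanics will vary from company to company, but the design choice is the same one described above: the AI drafts, a human decides, and everything is logged so problems can be caught and corrected.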

This monitoring will have to take place in the context of written policies and procedures. Just like you would tell your teenager how to drive a car before letting them behind the wheel, you should have written policies in place to inform your employees on the safest, most effective use of AI. These procedures will need buy-in from your organization’s relevant stakeholders and will need to be reviewed by legal counsel knowledgeable about AI. Your organization will have to leverage its culture to ensure that the key personnel know about the plan and can implement it properly.

Also, your company will need an AI incident response plan. We tell teenagers what to do if they have an accident, and the same proactive, preventative strategy applies to AI. An incident response plan sets out in advance how your company will address problems, rather than forcing you to scramble in real time to cobble together a suboptimal solution to a foreseeable problem. Should litigation or a government enforcement proceeding follow an AI incident, a written incident response plan can offer welcome guidance and protection.

Like a car, AI can make you more productive and get you to where you’re going faster. Also, like a car, AI can land you in a wreck if you’re not careful. Your company can enjoy the benefits and manage AI’s risks with thoughtful AI governance.

This post was authored by Artificial Intelligence Team member Sean Griffin and is also being shared on our Data Privacy + Cybersecurity Insider blog. If you’re interested in getting updates on developments affecting data privacy and security, we invite you to subscribe to the blog.

Manufacturers and other companies are facing a critical shortage of skilled workers in manufacturing, technology, healthcare, construction, hospitality, and other industries, a shortage that is outpacing educational institutions’ ability to train replacements. As baby boomers retire without sufficient younger workers to replace them, the problem will only worsen. Many companies are investing in artificial intelligence (AI) to compensate for these labor shortages.

AI refers to computers that can perform actions that typically require human intelligence. For example, finding your way from Point A to Point B used to require you to use your intelligence to read a map and navigate your path. Now, however, you just tell your car’s GPS where to go, and the AI figures out how to get there, taking into account traffic patterns, speed traps, and tolls. 

Just as AI can direct your driving, it can direct your employees to optimize their productivity. AI tools can help workers answer questions from customers and other workers. AI can also assume basic tasks that would typically involve employees, such as using customer service chatbots to answer basic questions without involving call center employees. In this way, AI can free up employees to tackle more complicated tasks that may require human creativity.

AI can also fill skill gaps. Organizations are using AI to automate detection and response to ransomware and other cyber-attacks. In the healthcare field, AI can help doctors analyze patient data and trajectories. More broadly, AI might be able to notice transferable skills better than humans can; for example, an AI algorithm might notice that your receptionist has developed skills that would make her an exceptional salesperson.

Many manufacturers use AI to scan resumes. AI can review more resumes more quickly than any HR department can. Trained properly, AI can select the best resumes and enable your team to interview higher-quality candidates.

And when your company hires someone, AI can help get your new employees up to speed faster.  AI chatbots can guide new hires through the onboarding process and provide answers to questions in real time. The United Kingdom’s National Health Service is exploring the use of AI to help train new workers.

Of course, all of the foregoing uses have legal and logistical pitfalls. Using AI in a way that complies with the law and fulfills your requirements requires a robust AI governance program, which I will describe in my next post.

Thank you to Jon Schaefer for this post. Jon focuses his practice on environmental compliance counseling and occupational health and safety.

On July 2, 2024, OSHA released the long-awaited Heat Injury and Illness Prevention in Outdoor and Indoor Work Settings proposed rule. If finalized, the rule would require millions of employers to take steps to protect their workers from extreme heat. However, the proposed rule would not apply to “sedentary” or remote workers, emergency-response workers, or employees at indoor job sites where temperatures are kept below 80 degrees Fahrenheit.

Under the proposed rule, employers would be required to identify heat hazards, develop emergency response plans related to heat illness, and provide training to employees and supervisors on the signs and symptoms of such illnesses. Employers would also have to establish appropriate rest breaks, provide shade and water, and provide heat acclimatization for new employees and for employees who have been away from the worksite for more than 14 days.

The final regulation will almost certainly face lawsuits from a variety of entities. Several major industries and trade groups, including many in the construction and manufacturing space, had previously raised concerns about the feasibility of implementing several concepts included in the proposed rule. Such legal challenges are likely to be boosted by the U.S. Supreme Court’s ruling last week eliminating the deference that courts owe to agency rulemaking.

However, before the proposed rule can become a final regulation, it must undergo a public notice and comment period. OSHA encourages the submission of written comments on the rule once it is published in the Federal Register. OSHA also has plans to hold a public hearing after the close of the written comment period. More information will be available on how and where to submit comments when the proposed rule is officially published in the Federal Register.

While we wait for the rule to become final, OSHA has made clear that it will continue to hold employers accountable for violations of the General Duty Clause and other regulations implicated by heat-related injuries and illnesses. This includes the continuation of heat-related inspections under OSHA’s National Emphasis Program – Outdoor and Indoor Heat-Related Hazards, which began in 2022.

Employers of all sizes and industries would be impacted by a final regulation on extreme heat. While the proposed rule is not yet binding on employers, it can be helpful to review the rule and evaluate whether your workplace is safe, healthy, and free from recognized hazards that could cause death or serious physical harm, such as exposure to extreme or excessive heat.

The global market for artificial intelligence (AI) in manufacturing was valued at $3.2 billion in 2023 and is poised to grow to $20.8 billion by 2028. Used wisely, AI can help manufacturers solve many of their most intractable issues, including the supply chain problems that have frustrated companies and their customers since the pandemic. Yet many manufacturers remain unaware of how AI can help transform their business, or even of what exactly AI is.

Basically, AI describes machines performing tasks that would ordinarily require human intelligence: when your phone corrects your spelling, when your favorite streaming service suggests a movie, or when your automatic vacuum cleaner maneuvers around your living room, you are using AI.

In the manufacturing context, AI can enhance supply chain visibility. AI tools can compile and synthesize raw data from invoices, product orders, customs declarations, and other documents to track inventory as goods move through the supply chain. AI can combine this information with historical data to predict sales and revenue, as well as demand fluctuations, enabling a manufacturer to optimize inventory levels.
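
To make the inventory example concrete, here is a minimal, illustrative Python sketch of the kind of calculation an AI-enabled planning tool might automate: forecasting next month's demand from historical orders and flagging when stock falls below a reorder point. The figures, function names, and simple moving-average forecast are assumptions chosen for illustration only; real systems combine far richer data sources (invoices, customs declarations, seasonality) and far more sophisticated models.

```python
# Illustrative only: a toy demand forecast and reorder check.
monthly_unit_sales = [1200, 1350, 1280, 1500, 1620, 1580]  # hypothetical order history
current_inventory = 900        # units on hand
lead_time_months = 1           # time to replenish stock
safety_stock = 300             # buffer against demand spikes

def forecast_next_month(history: list[int], window: int = 3) -> float:
    """Forecast demand as the average of the most recent months."""
    recent = history[-window:]
    return sum(recent) / len(recent)

forecast = forecast_next_month(monthly_unit_sales)
reorder_point = forecast * lead_time_months + safety_stock

print(f"Forecast demand next month: {forecast:.0f} units")
print(f"Reorder point: {reorder_point:.0f} units")
if current_inventory < reorder_point:
    print("Inventory is below the reorder point: place a replenishment order.")
else:
    print("Inventory is sufficient for now.")
```

The value an AI tool adds lies in automating and sharpening each input to this calculation: extracting order quantities from documents, adjusting the forecast for seasonality and market signals, and updating the reorder point continuously as conditions change.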

Like everything else, AI has its pitfalls. The computer maxim “garbage in, garbage out” applies to AI, so AI trained with poor data will return poor results. Your experience with autocorrect has probably taught you that AI can struggle to recognize whether its rules have become inapplicable or whether an exception should be made. AI can discern and repeat a pattern, but automatically repeating patterns can lead to biases that reduce the AI’s efficiency or, depending on how it is used, violate the law. And if not properly governed, AI can be provoked into disclosing private or sensitive information.

Moreover, AI’s impressive potential has attracted government attention. Last year, the White House issued its Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which announced that the federal government would “increase its internal capacity to regulate, govern, and support responsible use of AI.” Following the White House’s lead, the Federal Trade Commission has warned against misuse of AI in various contexts, including uses it deems harmful to consumers. Similarly, the Securities and Exchange Commission has begun pursuing companies that oversell their AI capabilities. These and other agencies can use their enforcement powers to monitor the development and use of AI systems, and they have signaled their intent to do so.

Given AI’s opportunities and risks, a manufacturer must manage its use of AI as thoughtfully as it does every other aspect of its business. An AI governance plan can guide a company toward safe AI usage that works effectively and consistently with its culture while avoiding unwanted government attention. A cyber incident response plan helps mitigate damage if any harm, unauthorized disclosure, or cyber-attack occurs, and it guides the company in complying with applicable laws after a cyber incident. Together, a strong AI governance plan and a cyber incident response plan can empower a manufacturer to harness the promise of AI while avoiding its perils.

This post was authored by Artificial Intelligence Team member Sean Griffin and is also being shared on our Data Privacy + Cybersecurity Insider blog. If you’re interested in getting updates on developments affecting data privacy and security, we invite you to subscribe to the blog.

Below is an excerpt of a legal update co-authored with my Labor and Employment Group colleagues Stephen Aronson and Christopher Costain.

On May 21, 2024, Connecticut Governor Ned Lamont signed legislation expanding Connecticut’s Paid Sick Leave law beginning January 1, 2025. The new legislation expands the scope of employers covered by the law, increases the number of employees eligible for leave, and broadens the qualifying reasons for paid sick leave, among other substantive changes.

Expansion of Employers Covered By the Paid Sick Leave Law

Currently, the Paid Sick Leave law requires employers with 50 or more employees in Connecticut to provide paid sick leave. The new law expands the employers covered by the law such that, by January 2027, private employers with at least one employee in Connecticut will be required to provide paid sick leave to their employees as follows:

  • Beginning January 1, 2025, employers that employ 25 or more employees in Connecticut will be subject to the law;
  • Beginning January 1, 2026, employers that employ 11 or more employees in Connecticut will be subject to the law; and
  • Beginning January 1, 2027, employers with at least one employee in Connecticut will be subject to the law.

Read more.

Below is an excerpt of a legal update authored by Intellectual Property + Technology Group co-chair John L. Cordani, Jr. and Business Litigation Group lawyer Janet J. Kljyan.

Intellectual property practitioners were anticipating the Supreme Court’s decision in Warner Chappell Music v. Nealy, which raised important questions regarding the statute of limitations and availability of damages for stale copyright infringement claims. We previously wrote about how the Supreme Court’s decision could impact copyright “trolls”: entrepreneurial plaintiffs who assert copyright infringement claims based on old, allegedly infringing uses of photographs or images on the internet to extract quick settlements from unsuspecting businesses. The Court’s decision, issued earlier this month, may embolden trolls in the short term, especially in the Second Circuit. However, the hope remains that the Supreme Court will rein in the statute of limitations to discourage trolls in a future case.

Warner Chappell Music v. Nealy raised two potential issues: (1) whether the Copyright Act’s three-year statute of limitations begins to run from the plaintiff’s “discovery” of the infringement (called the “discovery” rule), and (2) whether the Copyright Act limits recoverable damages to those incurred within the three years preceding the filing of a lawsuit. Read more.

Below is an excerpt of an article co-authored with my Labor and Employment Group colleague Jessica Pinto, which was published in the latest edition of PE magazine, the flagship publication of the National Society of Professional Engineers.

On January 10, 2024, the US Department of Labor (DOL) published a final rule revising previous guidance on employee and independent contractor status under the Fair Labor Standards Act (FLSA) – a reminder for employers to ensure proper classification of workers. Misclassification poses serious risks to employers as it may deny workers proper protections and benefits and result in fines, penalties, payments, and other liability. Engineers may be susceptible to misclassification due to the varying nature of work among different sectors, contractual or project-based work, and remote work in different locations. Read the full article.