Technology industry leaders and policy groups say a move this week by the U.K.’s data-privacy watchdog to levy a roughly $10 million fine on facial-recognition company Clearview AI Inc. sets clearer ground rules for balancing software innovation with people’s right to privacy.

In its ruling, the regulator alleged that the company collected images of people without their consent. Experts say the action is more likely to spur innovation than to hamper it.

“Clearview AI was operating well outside the bounds of what many AI practitioners are comfortable doing,” said Jeremy Howard, co-founder of Fast.ai, an online service that provides resources for AI developers and researchers. “Knowing that such a use of personal imagery is being penalized is encouraging to those of us that want to build useful tools in an ethical way,” he said.

Eric Schmidt, former chief executive of Alphabet Inc.’s Google and chair of the federal National Security Commission on Artificial Intelligence, said that within the AI market, facial recognition is a special case of a technology he expects to be “super regulated.”

Many key benefits of AI-enabled systems, including software tools designed to speed up disease detection and diagnosis, require massive amounts of personal data, Mr. Schmidt said. Beyond facial images and biometric data, he said, “we need to agree on what other information should be so restricted,” while offering individuals a chance to opt out.

Clearview AI, a New York-based startup, has amassed billions of facial images and personal information from Facebook, LinkedIn and other websites, which it uses to train facial-recognition software to identify individuals based on face scans.

The U.K.’s Information Commissioner’s Office on Monday fined Clearview AI more than £7.5 million, saying an investigation had determined the company collected more than 20 billion images of people without seeking their approval.

Though the company no longer offers facial-recognition services to U.K.-based organizations, the agency said, it has continued to use citizens’ images and personal data. In addition to the fine, the agency ordered Clearview AI to delete the data from its systems.

Other countries that have taken similar regulatory action against Clearview AI include France, Italy and Australia.

Hoan Ton-That, Clearview AI’s CEO, said the company collects only public data from the internet and complies with “all standards of privacy and law.” He said U.K. regulators are preventing advanced technology from being put to use by law enforcement agencies to help solve “heinous crimes against children, seniors and other victims of unscrupulous acts.”

“Though privacy is an important value to have, balance must be struck regarding the use of data that is already public that can be used to enhance the accuracy of artificial intelligence, namely facial recognition,” Mr. Ton-That said.

Clearview AI has been criticized for providing facial-recognition capabilities to law enforcement agencies in the U.S. and Canada, in some cases through free trials. Critics say such systems can contain algorithmic biases against ethnic minorities and other groups.

Broader commercial applications of facial-recognition technology include store and workplace security, targeted advertising and product recommendations, online payments and other apps and services triggered by facial scans.

Earlier this month, Clearview AI agreed to limit the sale of its image database as part of a legal settlement with the American Civil Liberties Union in the Circuit Court of Cook County in Illinois. The settlement stems from a 2020 lawsuit brought by the ACLU claiming Clearview had violated the Biometric Information Privacy Act by gathering biometric identifiers of Illinois residents without their consent. The state law, enacted in 2008, regulates the collection, use and handling of biometric data by private entities.

The U.S. currently has no federal law specifically governing the technology; several proposed bills have stalled or failed to advance beyond legislative committees.

Dahlia Peterson, a research analyst at Georgetown University’s Center for Security and Emerging Technology, said the U.K. regulator’s move is unlikely to hinder Clearview AI’s use of facial-recognition technology or its ability to expand. “Fines that come after the fact may do little to stop image data exploitation,” Ms. Peterson said.

Strict privacy protections in the U.K. and Europe, she said, have forced technology companies there to innovate, for example by developing automated face-pixelation capabilities for live video surveillance. Efforts are also under way to improve the accuracy of AI models trained on synthetic biometric data rather than on images of real people, Ms. Peterson said.
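To illustrate the kind of face-pixelation capability Ms. Peterson describes, here is a minimal sketch using OpenCV’s stock Haar-cascade face detector on a webcam feed. It is an illustrative assumption about how such a system could work, not the approach of any company or regulator named in this article; production surveillance systems would use far more robust detectors and video pipelines.

```python
# Minimal sketch of automated face pixelation on a live video feed.
# Illustrative only; uses OpenCV's bundled Haar-cascade face detector.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def pixelate_faces(frame, blocks=12):
    """Detect faces in a BGR frame and replace each with a coarse mosaic."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        face = frame[y:y + h, x:x + w]
        # Downscale, then upscale with nearest-neighbor to create pixel blocks.
        small = cv2.resize(face, (blocks, blocks), interpolation=cv2.INTER_LINEAR)
        frame[y:y + h, x:x + w] = cv2.resize(
            small, (w, h), interpolation=cv2.INTER_NEAREST
        )
    return frame

cap = cv2.VideoCapture(0)  # default camera stands in for a surveillance feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("pixelated", pixelate_faces(frame))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

The key privacy property is that pixelation happens before any frame is stored or transmitted, so identifiable faces never leave the capture device.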

Greater regulatory certainty can accelerate innovation by motivating companies to invest in research and development that aligns with the public interest rather than harming it, said David Leslie, director of ethics and responsible innovation research at the Alan Turing Institute, the U.K.’s national research center for data science and artificial intelligence.

Ari Lightman, a professor of digital media and marketing at Carnegie Mellon University’s Heinz College of Information Systems and Public Policy, said clamping down on companies like Clearview AI will likely have an immediate impact on how companies use the data they collect, along with how and where they collect it. “Data gathering is going to have to check the boxes associated with ethical, regulatory and legal precedent or might result in punitive measures later on,” Mr. Lightman said.

Stephen Messer, co-founder and vice chairman of software maker Collective[i], said a heavy-handed approach to facial-recognition regulations in Europe and elsewhere risks chasing advanced developers to “larger, less regulated markets.”

Write to Angus Loten at angus.loten@wsj.com