The Federal Trade Commission alleges that IntelliVision Technologies made false claims about its facial recognition software, highlighting concerns over bias and accountability in AI.
This week, the Federal Trade Commission (FTC) took action against IntelliVision Technologies Corp., a company that develops facial recognition software powered by artificial intelligence (AI). The FTC issued a proposed consent order alleging that IntelliVision made false and misleading claims about the effectiveness and impartiality of its technology, particularly regarding gender and racial bias.
The allegations state that IntelliVision presented its facial recognition software as being free from bias, asserting that it had been developed with consideration for various genders, ethnicities, and skin tones. However, the proposed consent order from the FTC indicates that IntelliVision did not have adequate evidence to support its claims. According to the FTC’s complaint, the company asserted that its software achieved “one of the highest accuracy rates on the market,” alongside a claim of operating with zero gender or racial bias, both of which were unsubstantiated.
Furthermore, the complaint highlights discrepancies in the training data used for the AI system. IntelliVision claimed to have trained its facial recognition technology on millions of images; however, the FTC contends that the actual training involved approximately 100,000 unique individual images, to which variations were applied.
The FTC also pointed out that IntelliVision lacked supportive evidence for its assertion regarding the anti-spoofing technology of its software. The company claimed that its system could not be “tricked” by photographs or videos, a statement for which no substantiation was provided.
Samuel Levine, the Director of the FTC’s Bureau of Consumer Protection, emphasised the need for accountability in the claims made by tech companies, stating, “Companies shouldn’t be touting bias-free artificial intelligence systems unless they can back those claims up. Those who develop and use AI systems are not exempt from basic deceptive advertising principles.”
This action against IntelliVision marks only the second instance in which the FTC has alleged misrepresentation in connection with AI facial recognition technology. Earlier, in December 2023, the FTC entered into a consent order with Rite Aid over its failure to implement reasonable procedures in its use of AI facial recognition systems in stores, reflecting growing scrutiny of how businesses deploy AI technologies.
The developments surrounding IntelliVision and the FTC underscore ongoing concerns regarding transparency and accountability in AI automation, especially as businesses increasingly integrate such technologies into their operations.
Source: Noah Wire Services
- https://news.mit.edu/2018/study-finds-gender-skin-type-bias-artificial-intelligence-systems-0212 – This study found that facial recognition systems can exhibit gender and skin-type biases, detailing significant error rates in gender classification for darker-skinned women, which contradicts IntelliVision’s claims of bias-free technology.
- https://www.itpro.com/security/privacy/356882/the-pros-and-cons-of-facial-recognition-technology – This article discusses the imperfections and biases in facial recognition technology, particularly affecting women and people of colour, as well as the lack of transparency and accountability in its deployment, citing high error rates in systems used by law enforcement such as the UK’s Metropolitan Police Service. This aligns with the FTC’s allegations against IntelliVision.
- https://mobidev.biz/blog/improve-ai-facial-recognition-accuracy-with-machine-deep-learning – This article notes that achieving high accuracy in AI facial recognition is complex and depends on robust training data, which is relevant both to scepticism about IntelliVision’s accuracy claims and to the FTC’s allegations about its insufficient training data.
- https://www.uschamber.com/assets/archived/images/ctec_facial_recognition_policy_principles_002.pdf – This document advocates for common, nationwide standards, standardised testing and benchmarking, risk-based performance standards, and transparency in facial recognition technology, aligning with the FTC’s emphasis on accountability and evidence-based claims.