VIDIZMO Responsible AI Statement

Purpose

At VIDIZMO, we recognize the transformative potential of Artificial Intelligence (AI) to enhance our operations, products, and services. AI is redefining the way the public and private sectors address some of their customers' most pressing challenges, including ensuring regulatory compliance, detecting OSHA violations, and responding to public records requests. 

However, like any other technology, AI carries potential harms that, if not addressed, can lead to wrongful decision-making by AI models. Moreover, the increasing use of AI has given rise to data privacy and security concerns. VIDIZMO understands the concerns surrounding the enterprise use of AI, including data privacy, bias, accountability, fairness, reliability, safety, inclusiveness, and transparency. 

This Policy aims to address these concerns. It outlines our core Responsible AI principles, which form the foundation of our Responsible AI approach and serve as guideposts for all development, deployment, and use of AI at VIDIZMO. This Policy supplements the VIDIZMO Privacy Policy. 

Scope

Definition. There is still no universally accepted definition of AI. We have adopted the capability-based definition of an AI system set forth in the proposed EU AI Act: 

“A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” 

Subject. This Policy applies to all employees, contractors, partners, and stakeholders involved in the development, deployment, or use of AI and related technologies within VIDIZMO’s products and services. 

Application. AI systems should be governed across their entire lifecycle, including research, design, development, testing, deployment, and post-deployment monitoring. As such, this Policy applies to all phases of AI development and use. 

Compliance and Enforcement. Compliance with this Policy is required and is the responsibility of each VIDIZMO employee. VIDIZMO has implemented internal governance structures and measures to ensure that AI systems perform to the high standards set forth herein, satisfy internal and external expectations, identify and mitigate potential negative impacts, and, more generally, comply with this Policy. 

Non-compliance with this Policy may result in disciplinary action, up to and including termination of employment or contracts. VIDIZMO will also work with partners and third-party vendors to ensure their practices align with this Policy. 

Responsible AI Principles

In alignment with our established enterprise values, we are committed to upholding ethical principles in the development, deployment, and use of AI technologies. These principles align with the trustworthiness characteristics outlined in the National Institute of Standards and Technology AI Risk Management Framework (NIST AI RMF). 

VIDIZMO Responsible AI principles encompass the following: 

Transparency. VIDIZMO’s goal is to inform users, as appropriate, about when and how AI is employed in our technologies, the intent of the AI, how data is used, and any potential impact on users, in a manner that is accessible, transparent, and understandable. 

VIDIZMO is committed to integrating AI thoughtfully, ensuring it adds meaningful value rather than being included arbitrarily. AI features are purposefully designed to complement our products. This involves enhancing essential business workflows, improving business outcomes, and simplifying content management processes. 

Accountability. VIDIZMO requires teams to ensure that there is a clear line of accountability for AI and technology projects, with designated personnel responsible for ethical oversight. Regular audits will be conducted to assess the ethical implications of AI systems. VIDIZMO will maintain channels for reporting ethical concerns related to AI and technology use. 

VIDIZMO uses a comprehensive AI methodology covering all aspects of responsible AI system development, including:

  • Determining the motivation behind the data: for what purpose the training dataset was created, who created it, and who funded or sponsored its creation. 
  • Understanding the composition of the data: whether it is structured or unstructured, whether the sample is representative of the larger dataset, and whether it contains any sensitive information. 
  • Ensuring that the dataset used for training and fine-tuning AI models was obtained with explicit consent. 
  • Preprocessing data, including cleaning, labeling, and augmentation, before using it to train AI models. 
  • Training AI models using techniques such as supervised learning (SL) and reinforcement learning (RL). 
  • Validating and testing AI models through a comprehensive validation process that uses separate, live datasets to evaluate accuracy and identify potential overfitting (a minimal sketch follows this list). 
  • Keeping track of changes and ensuring traceability through data and model versioning, so AI models can be easily reproduced, shared, and released. 
  • Monitoring the performance of AI models to ensure they are functioning optimally and making necessary adjustments. 
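
To illustrate the validation step above, the following minimal Python sketch (using scikit-learn) trains a model on one split of a dataset, evaluates it on a separate held-out split, and flags a large gap between training and validation accuracy as potential overfitting. The dataset, model, and threshold are hypothetical placeholders, not a description of VIDIZMO's production pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical dataset standing in for a curated, consented training corpus.
X, y = make_classification(n_samples=5000, n_features=20, random_state=42)

# Hold out a separate validation set that the model never sees during training.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

train_acc = accuracy_score(y_train, model.predict(X_train))
val_acc = accuracy_score(y_val, model.predict(X_val))

# A large gap between training and validation accuracy signals potential overfitting.
print(f"train accuracy: {train_acc:.3f}, validation accuracy: {val_acc:.3f}")
if train_acc - val_acc > 0.05:  # illustrative threshold
    print("Warning: possible overfitting; review model complexity or training data.")
```

In practice, the acceptable gap and the choice of evaluation metric depend on the model and task, and validation data is kept strictly separate from training data.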

Fairness and Non-discrimination. Fairness in AI involves mitigating algorithmic bias and ensuring that decisions are not based on protected characteristics such as race, gender, sexual orientation, age, and religion. 

VIDIZMO strives to develop and deploy AI systems trained on datasets that reflect the diversity of society, and to design models that can learn and adapt over time without developing systemic, computational, or statistical bias. 

For example, consider an AI system designed to help loss prevention officers or law enforcement agents detect theft. When trained on biased data, such a system might “learn” that people of a certain race are involved in theft incidents and discriminate against them. 

VIDIZMO seeks to identify and remediate any harmful bias within our training data and applications using unfairness-mitigation algorithms. These algorithms, which include reduction and post-processing approaches, help prevent discriminatory outcomes against any individual, group, or community based on race, gender, sexual orientation, age, religion, or other protected characteristics. 
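
As one illustrative example of a reduction-based mitigation approach, the open-source fairlearn library can retrain a standard classifier under a fairness constraint such as demographic parity. The sketch below uses synthetic data and a hypothetical sensitive attribute; it shows the general pattern and is not a description of VIDIZMO's internal tooling.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.reductions import ExponentiatedGradient, DemographicParity
from fairlearn.metrics import MetricFrame, selection_rate

rng = np.random.default_rng(0)

# Synthetic features, labels, and a hypothetical protected attribute (group label).
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=2000) > 0).astype(int)
group = rng.integers(0, 2, size=2000)

# Reduction approach: retrain a base classifier under a demographic-parity constraint.
mitigator = ExponentiatedGradient(LogisticRegression(), constraints=DemographicParity())
mitigator.fit(X, y, sensitive_features=group)
y_pred = mitigator.predict(X)

# Compare per-group selection rates and accuracy to see how the constraint behaves.
frame = MetricFrame(
    metrics={"selection_rate": selection_rate, "accuracy": accuracy_score},
    y_true=y,
    y_pred=y_pred,
    sensitive_features=group,
)
print(frame.by_group)
```

When retraining the underlying model is not feasible, a post-processing approach such as fairlearn's ThresholdOptimizer can be applied to an already-trained model instead.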

Privacy and Security. VIDIZMO is committed to protecting user privacy and will adhere to robust data protection standards in AI development during both training and inference phases. 

VIDIZMO ensures the privacy and security of data in the training phase as follows: 

  • Ensuring that customer data is not used to train AI models. 
  • Protecting against privacy violations by applying data anonymization techniques such as data masking, tokenization, and differential privacy to remove instances of personally identifiable information (PII) from datasets (see the illustrative masking sketch after this list). 
  • Encrypting data both at rest and in transit using AES-256 and TLS encryption standards. 
  • Integrating security testing into the existing development process through a secure software development life cycle (SSDLC). 
  • Capturing the actions of participants in a data-sharing environment using tamper-proof privacy auditing and monitoring. 
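
The sketch below is a minimal, hypothetical illustration of the data-masking step mentioned above: it redacts e-mail addresses and phone-number patterns from free-text records before they enter a training corpus. Production anonymization (tokenization, differential privacy, entity-based PII detection) is considerably more involved.

```python
import re

# Simple, illustrative PII patterns; real pipelines typically combine pattern matching
# with named-entity recognition and human review.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def mask_pii(text: str) -> str:
    """Replace e-mail addresses and phone numbers with fixed placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

# Hypothetical record; the masked version is what would enter the training corpus.
record = "Contact Jane at jane.doe@example.com or +1 (555) 123-4567 about the incident."
print(mask_pii(record))
```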

In the inference phase, VIDIZMO ensures data privacy and security using the following methods: 

  • Protecting data at rest and in transit using advanced encryption standards. 
  • Protecting privacy through data minimization, using only the minimum amount of information required for each task. 
  • Running AI model predictions inside VIDIZMO’s secure infrastructure, or within the customer’s own infrastructure in the case of on-premises deployments, using local processing rather than sending data to a remote server. 
  • Ensuring AI model security throughout the entire model development life cycle and protecting entry points of end-user devices using endpoint security. 
  • Performing stateless inference, so that each inference is independent of all other inferences made with the model (see the sketch following this list). 
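
As a minimal sketch of local, stateless inference, the following runs a prediction entirely within the hosting environment and retains no per-request state between calls. The ONNX model file, input shape, and preprocessing are hypothetical placeholders rather than VIDIZMO's actual serving code.

```python
import numpy as np
import onnxruntime as ort

# Hypothetical model file deployed alongside the application; inference stays on
# local infrastructure instead of being sent to an external service.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

def predict(features: np.ndarray) -> np.ndarray:
    """Stateless inference: the output depends only on this request's input.

    No request data is cached, logged, or transmitted outside the host.
    """
    batch = features.astype(np.float32).reshape(1, -1)
    outputs = session.run(None, {input_name: batch})
    return outputs[0]

# Example call with a hypothetical feature vector.
print(predict(np.zeros(10)))
```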

Safety and Reliability. VIDIZMO designs and tests AI systems and their components to ensure they are safe, reliable, and perform as intended, or behave within a defined range of acceptability, under expected conditions of use. AI systems will be designed to allow human intervention where necessary, ensuring that critical decisions are not solely dependent on AI. 

Social Responsibility. VIDIZMO will assess the social impact of AI technologies, ensuring that their deployment contributes positively to society and avoids harm. We are committed to aligning our AI initiatives with broader goals of environmental sustainability and social good. 

User Empowerment. VIDIZMO will inform users about AI’s role in our products, and users will have the choice to opt in or opt out where applicable. VIDIZMO will provide mechanisms for our customers to give feedback on AI systems and raise any concerns for review and action. Users have the flexibility to select and activate specific AI features, ensuring they align with their organization’s ethical standards, guidelines, and objectives. They can also manage how and when AI is incorporated into their processes. 

Governance and Implementation

Ethics Committee. An internal Ethics Committee will provide executive leadership and insight, including timely direction, mandates, and resourcing for responsible AI efforts. The Committee includes representation from across senior-level management. Its responsibilities include: 

  • Overseeing the adherence to this Policy; 
  • Reviewing AI projects for ethical considerations; 
  • Addressing any ethical issues that arise. 

Training and Awareness. All employees involved in AI and technology projects will receive regular training on ethical AI practices and the principles outlined in this Policy. 

Continuous Improvement. VIDIZMO commits to continuously reviewing and updating this Policy as technology evolves and new ethical challenges emerge. Feedback from stakeholders will be incorporated into Policy revisions. 
