AI Module Compliance Page

As we develop modules for AI summaries, particularly for tender documents, ensuring compliance with applicable regulations is paramount. Our goal is to guide customers in addressing critical areas such as transparency, data protection, bias mitigation, and system performance. 

This page outlines the key aspects to help customers understand the risks associated with AI, providing essential information to ensure their use of AI technology is responsible and compliant with industry standards.

1. Tender Summarization Module

Purpose:

To provide suppliers with a concise summary of tender documents, highlighting the key aspects so they can quickly understand the requirements and decide whether to participate.

Process:

Data Input:   The module ingests full tender documents from publicly available databases.

Framework Application: A predefined framework is applied to identify and extract key aspects of the tenders. This framework includes criteria such as:

Project Scope: Description of the project or service required.

Budget: Estimated cost or financial constraints.

Timeline: Deadlines for submission and project completion.

Eligibility Requirements: Qualifications and criteria suppliers must meet.

Submission Guidelines: Instructions on how to submit bids.

Summarization: Using Natural Language Processing (NLP), the module processes the tender documents to generate a concise summary focused on the key aspects identified (a sketch of this step follows the process list below).

Display: The summarized information is then presented to suppliers through a user-friendly interface, allowing them to quickly grasp the essential details of the tender.
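
To make this concrete, below is a minimal sketch of how the framework-driven summarization step could be implemented with a foundational model through the OpenAI Python SDK (the FAQ notes that models such as GPT-4o are used). The prompt wording, function name, and parameter choices are illustrative assumptions, not the production implementation.

```python
# Minimal sketch of the summarization step (illustrative only).
# Assumes the OpenAI Python SDK; prompt wording and the helper name are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FRAMEWORK_ASPECTS = [
    "Project Scope",
    "Budget",
    "Timeline",
    "Eligibility Requirements",
    "Submission Guidelines",
]

def summarize_tender(document_text: str) -> str:
    """Generate a concise summary covering the predefined framework aspects."""
    prompt = (
        "Summarize the tender document below. Cover only these aspects: "
        + ", ".join(FRAMEWORK_ASPECTS)
        + ". Use only information that is present in the document.\n\n"
        + document_text
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep the output as stable as possible
    )
    return response.choices[0].message.content
```

Restricting the prompt to the predefined framework aspects is what keeps the summaries focused on project scope, budget, timeline, eligibility requirements, and submission guidelines.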

2. Tender Matching and Labeling Module

Purpose:  

To match suppliers with tenders that best fit their capabilities and criteria, optimizing their participation.

Process:

Data Input: The module takes detailed tender documents and supplier profiles as inputs.

Criteria Extraction: Key aspects such as price, type of service or good, and other relevant factors are extracted from the tenders using NLP techniques.

Numeric Labelling: These aspects are translated into a framework with numeric labels, representing various attributes such as:

Price Range:   Numeric value indicating the budget category.

Service Type:   Numeric code corresponding to the type of service or product required.

Complexity Level:   Numeric rating of the project's complexity or technical requirements.

Geographical Location:   Numeric code representing the location of the project.

Supplier Profile Matching:   Suppliers provide their criteria, such as the services they offer, price range, and geographical preferences. These criteria are also converted into numeric labels.

Matching Algorithm:   The module uses a matching algorithm to compare the numeric labels of tenders and suppliers, calculating the degree of match based on the provided criteria (see the sketch after this list).

Recommendation Display:   The module displays a list of tenders that best match the supplier's criteria, ranked by degree of match. Suppliers can then review the list and decide which tenders to pursue.
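
The following minimal sketch illustrates how the numeric labels and the degree-of-match calculation could look. The label codes, comparison rules, and scoring formula are assumptions for illustration, not the production algorithm.

```python
# Minimal sketch of numeric labeling and matching (illustrative only).
# Label codes and the scoring rule are assumptions, not the production algorithm.
from dataclasses import dataclass

@dataclass
class LabelVector:
    price_range: int       # e.g. 1 = under 50k, 2 = 50k-100k, 3 = over 100k
    service_type: int      # e.g. 10 = IT services, 20 = construction
    complexity_level: int  # 1 (low) .. 5 (high)
    location: int          # numeric region code

def match_score(tender: LabelVector, supplier: LabelVector) -> float:
    """Return a degree of match in [0, 1]; 1.0 means all criteria align."""
    checks = [
        tender.price_range == supplier.price_range,
        tender.service_type == supplier.service_type,
        tender.complexity_level <= supplier.complexity_level,  # supplier can handle it
        tender.location == supplier.location,
    ]
    return sum(checks) / len(checks)

# Usage: rank available tenders for one supplier profile.
supplier = LabelVector(price_range=2, service_type=10, complexity_level=4, location=3)
tenders = {
    "tender-A": LabelVector(2, 10, 3, 3),
    "tender-B": LabelVector(3, 20, 5, 1),
}
ranked = sorted(tenders.items(), key=lambda kv: match_score(kv[1], supplier), reverse=True)
print(ranked[0][0])  # best-matching tender id ("tender-A")
```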

3. Practical Application

Tender Summarization Module:  

Scenario: A supplier visits the platform and wants to quickly assess if any tenders are worth their time.

Action: They view the summarized tenders, instantly seeing the project scope, budget, timeline, and requirements without reading lengthy documents.

Benefit: Saves time and helps the supplier quickly decide which tenders to consider.

 

Tender Matching and Labeling Module:  

Scenario: A supplier specifies that they are looking for tenders related to IT services with a budget between $50,000 and $100,000 in the healthcare sector.

Action: The module labels their criteria and matches them against available tenders, providing a ranked list of tenders that fit the specified criteria.

Benefit: Optimizes the supplier’s chances by showing the most relevant tenders, increasing the efficiency of their bidding process.

These AI modules work together to streamline the tender participation process, making it easier for suppliers to identify and engage with opportunities that best fit their capabilities and interests.

Compliance 

4. Data Management and Privacy

Question: What data sources will be used for training and operating the AI summarization module? Are these sources compliant with relevant data protection laws?

Response:   The data sources will include publicly available databases where tenders are published, such as government procurement websites and industry-specific tender platforms. These sources are compliant with relevant data protection laws as they are publicly accessible and operate under legal frameworks that ensure transparency and data protection. The data will be processed in accordance with the same legal basis used for these databases, ensuring adherence to regulations like the GDPR and other applicable data protection laws.

 

5. Transparency and Explainability

Model Explainability  

Question: How will the AI model's decision-making process be documented and made understandable to users?

Response: The AI system’s decision-making process will be documented through detailed technical documentation. This documentation will include explanations of the algorithms used, the data preprocessing steps, the criteria for summarization, and the labeling methodology. Additionally, we will provide visual aids and examples to illustrate how the model processes input data to generate summaries and labels. User interfaces will include tooltips and help sections to explain key features and functions in an accessible manner.

 

Auditability:

Question: What mechanisms are in place for auditing the AI module’s outputs and ensuring they align with expected standards?

Response: Regular audits will be conducted to evaluate the outputs of the AI module. These audits will involve:

Internal Reviews: Periodic internal reviews by our technical team to assess the accuracy and relevance of the summaries and labels.

External Audits: Engagement with independent third-party auditors to review and validate the module's performance and compliance with standards.

Quality Control Checks: Implementation of automated quality control checks that flag anomalies or deviations from expected outputs (a sketch follows this list).

Reporting and Logging: Maintenance of detailed logs and reports on the module's outputs, enabling traceability and accountability.
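
As an illustration of the automated quality control checks mentioned above, the sketch below gates a summary's metrics against expected ranges and writes the outcome to a log for traceability. The metric names and thresholds are assumptions, not the production configuration.

```python
# Minimal sketch of an automated quality control check (illustrative only).
# Metric names and thresholds are assumptions; the production checks may differ.
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("summary_qc")

QC_THRESHOLDS = {"coverage_score_min": 0.7, "length_ratio_max": 0.3}

def passes_quality_check(metrics: dict) -> bool:
    """Flag summaries whose metrics deviate from expected ranges and log the outcome."""
    ok = (
        metrics.get("coverage_score", 0.0) >= QC_THRESHOLDS["coverage_score_min"]
        and metrics.get("length_ratio", 1.0) <= QC_THRESHOLDS["length_ratio_max"]
    )
    if ok:
        logger.info("QC passed: %s", metrics)
    else:
        logger.warning("QC flag raised: %s", metrics)  # feeds the audit log
    return ok
```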

 

6. Bias and Fairness

Bias Mitigation

Question: What steps are being taken to identify and mitigate any biases in the AI model’s training data and outputs?

Response: To identify and mitigate biases, the following steps will be taken:

Data Diversity: Ensuring that the training data includes a diverse range of tenders from various sectors and regions to prevent skewed outputs.

Regular Updates: Regularly updating the training data to include new and varied tenders, reducing the risk of outdated or biased information.

Human Oversight: Incorporating human oversight in the review process to catch and address any biases that automated tools might miss.

 

7. Performance and Accuracy

Evaluation Metrics

Question: What metrics will be used to evaluate the performance and accuracy of the AI summaries?

Response: The performance and accuracy of the AI summaries will be evaluated using the following metrics:

Precision: The proportion of relevant information correctly identified in the summaries.

Recall: The proportion of all relevant information that is captured in the summaries.

F1 Score: The harmonic mean of precision and recall, providing a single metric for overall accuracy.

User Satisfaction: Feedback from users regarding the usefulness and relevance of the summaries.

Comparison to Benchmarks: Comparison of AI-generated summaries against human-generated benchmarks to ensure quality.
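
The sketch below shows one way precision, recall, and the F1 score could be computed, treating the key aspects captured by an AI summary as a set and comparing it against a human-generated benchmark. The aspect names and the set-based comparison are illustrative assumptions.

```python
# Minimal sketch of precision/recall/F1 against a human benchmark (illustrative only).

def precision_recall_f1(predicted: set[str], reference: set[str]) -> tuple[float, float, float]:
    """Compare the aspects captured by the AI summary against a human-made benchmark."""
    true_positives = len(predicted & reference)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(reference) if reference else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# Example: the summary captured scope, budget, and timeline but missed eligibility.
ai_aspects = {"project scope", "budget", "timeline"}
human_aspects = {"project scope", "budget", "timeline", "eligibility requirements"}
print(precision_recall_f1(ai_aspects, human_aspects))  # (1.0, 0.75, ~0.857)
```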

 

Continuous Monitoring

Question: How will we monitor the AI system's performance over time to ensure consistent quality and accuracy?

Response: Continuous monitoring will be implemented through:

Automated Monitoring: Automated systems to track the performance metrics and alert the team to any significant changes or declines in quality (a sketch follows this list).

Regular Performance Reviews: Scheduled performance reviews to analyze metrics and make necessary adjustments.

User Feedback Analysis: Ongoing collection and analysis of user feedback to identify areas for improvement.

Performance Logs: Detailed logs of system performance over time, allowing for trend analysis and proactive adjustments.
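
A minimal sketch of the automated monitoring idea follows: track a rolling average of a quality metric and alert the team when it falls below a threshold. The window size, threshold value, and alerting hook are assumptions.

```python
# Minimal sketch of rolling performance monitoring (illustrative only).
# Window size, threshold, and the alerting hook are assumptions.
from collections import deque

WINDOW = 50            # number of recent summaries to average over
ALERT_THRESHOLD = 0.6  # alert if the rolling F1 drops below this value

recent_f1_scores: deque = deque(maxlen=WINDOW)

def notify_team(rolling_mean: float) -> None:
    """Hypothetical alerting hook (e.g. an e-mail or chat notification)."""
    print(f"ALERT: rolling F1 dropped to {rolling_mean:.2f}")

def record_score(f1: float) -> None:
    """Track the rolling average F1 and alert on a sustained decline in quality."""
    recent_f1_scores.append(f1)
    if len(recent_f1_scores) == WINDOW:
        rolling_mean = sum(recent_f1_scores) / WINDOW
        if rolling_mean < ALERT_THRESHOLD:
            notify_team(rolling_mean)
```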

 

8. User Interaction and Feedback

User Feedback Mechanism:  

Question: How will we collect and incorporate user feedback to improve the AI summaries?

Response: User feedback will be collected through multiple channels, including:

Surveys: Periodic surveys to gather detailed feedback on user experiences and satisfaction.

Customer Support: A dedicated customer support team to handle feedback, inquiries, and complaints.

Incorporation Process: Feedback will be reviewed regularly by our product development team, and actionable insights will be incorporated into subsequent updates and improvements.

 

Error Reporting:  

Question: What processes will be in place for users to report errors or inaccuracies in the summaries?

Response: Users will be able to report errors or inaccuracies through:

Support Tickets: A system for submitting support tickets to our customer success team, ensuring that all reports are tracked and addressed.

Response Protocol: A defined protocol for our technical team to investigate and resolve reported errors promptly.

Transparency: Users will be informed about the status of their reports and any actions taken to address the issues.



9. Liability Limitation

Mercell does not guarantee the accuracy or error-free output of the AI modules. While every effort is made to ensure that the information provided is reliable and up to date, the nature of AI-generated summaries and matches means that occasional inaccuracies or errors may occur. It is therefore imperative that users exercise their own judgment and due diligence when utilizing the outputs of these modules. Users should not rely solely on the AI-generated information for making decisions and should always cross-check with the original tender documents.

Furthermore, users are responsible for conducting their own risk assessments, including compliance with all relevant regulations and legal requirements. This includes, but is not limited to, ensuring adherence to procurement laws, financial guidelines, and industry standards. Mercell does not assume liability for any decisions made based on the AI module outputs, and users are advised to consult appropriate professionals or legal advisors to ensure that their actions meet all necessary regulatory and compliance obligations.

Human oversight is essential in the utilization of AI module outputs. While the modules are designed to assist in the summarization and matching of tenders, they are tools that support, rather than replace, human decision-making. Users should actively engage in the review process, apply their expertise, and conduct comprehensive risk assessments. This oversight ensures that all decisions are well-informed and compliant with both internal policies and external regulations.

 



FAQ

1. Data Management and Privacy

Data Sources: What data sources will be used for training and operating the AI summarization module? Are these sources compliant with relevant data protection laws?

We are using data from several publicly available tender sources. We use both the Mercell-generated tender metadata and the documents included in the tender. NOTE: We are using foundational models (LLMs such as OpenAI's GPT-4o) in a way that explicitly opts our data out of training. Hence, we are not training any models.

Data Storage: Where will the data be stored, and what measures are in place to protect this data from unauthorized access?

We are storing the data in a dedicated AWS account which follows the same general policies as other Mercell AWS accounts. Access is possible in two ways:

  • Through AWS IAM users/roles with the right permission set. At the time of writing, this is only available to developers working on the system.

  • Through our REST API with valid client credentials. This is secured exclusively through the OAuth2-compliant client credentials flow. Credentials need to be created by the developer team.
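
For illustration, the sketch below shows how a client could obtain an access token through the OAuth2 client credentials flow and call the REST API with it. The token URL, API URL, and response fields are placeholders, not the real endpoints; actual credentials are issued by the developer team.

```python
# Minimal sketch of the OAuth2 client credentials flow (illustrative only).
# TOKEN_URL and API_URL are placeholders, not the real endpoints.
import requests

TOKEN_URL = "https://auth.example.com/oauth2/token"      # placeholder
API_URL = "https://api.example.com/v1/tender-summaries"  # placeholder

def get_access_token(client_id: str, client_secret: str) -> str:
    """Exchange client credentials for a bearer token."""
    resp = requests.post(
        TOKEN_URL,
        data={"grant_type": "client_credentials"},
        auth=(client_id, client_secret),  # HTTP Basic authentication
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def fetch_summaries(token: str) -> list:
    """Call the REST API with the bearer token."""
    resp = requests.get(
        API_URL,
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```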

 

2. Transparency and Explainability

 

Model Explainability: How will the AI model's decision-making process be documented and made understandable to users?

The decision process of the AI model itself is not transparent, and full transparency is not achievable: we use proprietary foundational models that do not document how they have been trained or how they generate results. We can provide transparency for the process that uses the model and for how the model is used, but the model itself is a black box.

 

Auditability: What mechanisms are in place for auditing the AI module’s outputs and ensuring they align with expected standards?

We have several quality control mechanisms:

  1. Human evaluation of samples. We check our summaries with expert knowledge from within Mercell for accuracy and completeness (precision and recall).

  2. Machine evaluation of samples. We use a set of different benchmark scores to evaluate the summary output against a set of reference summaries. This makes it possible to quantify the summary accuracy and completeness. 

  3. Machine evaluation at scale. We use the same set of benchmark scores and acceptability criteria and evaluate each summary before it is presented to the customer/user.

  4. User evaluation. The end-users can provide feedback on the summaries, both binary (is it a good or bad summary) as well as qualitative (what is missing, what can be improved). This feedback will be incorporated in both our human as well as our machine evaluation of results.

Out of these evaluations, 1 and 2 are currently in place; 3 and 4 are being planned.
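
For illustration, the sketch below shows a simplified benchmark check: a unigram-overlap F1 score between a generated summary and a reference summary, combined with an acceptability threshold. The production system uses a set of different benchmark scores; this particular score and threshold are assumptions.

```python
# Minimal sketch of a benchmark check against a reference summary (illustrative only).
# The unigram-overlap score and the acceptance threshold are assumptions.
from collections import Counter

ACCEPTANCE_THRESHOLD = 0.5  # hypothetical acceptability criterion

def unigram_f1(candidate: str, reference: str) -> float:
    """F1 over unigram overlap between a generated summary and a reference summary."""
    cand, ref = Counter(candidate.lower().split()), Counter(reference.lower().split())
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

def is_acceptable(candidate: str, reference: str) -> bool:
    """Apply the acceptability criterion before a summary is used further."""
    return unigram_f1(candidate, reference) >= ACCEPTANCE_THRESHOLD
```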

  

3. Bias and Fairness

Bias Mitigation: What steps are being taken to identify and mitigate any biases in the AI model’s training data and outputs?

We are not using the foundational models' internal knowledge; instead, we use them as a way to process the documents. We instruct the models to use only the information available in the documents. This leaves minimal room for bias beyond any bias captured in the documents themselves.

 

Fairness Checks: How will we ensure that the summarization is fair and non-discriminatory across different types of tender documents?

The model evaluates all documents in the same way and looks for the most accurate information. Since the model's output is not fully deterministic, it cannot be guaranteed that results will be identical every time, but in principle we are processing factual documents and retrieving factual information from all of them. No mechanism is in place to give special treatment to certain types of documents.

 

4. Performance and Accuracy

Evaluation Metrics: What metrics will be used to evaluate the performance and accuracy of the AI summaries?

 

See the answer under "2. Transparency and Explainability", Auditability.

Continuous Monitoring: How will we monitor the AI system's performance over time to ensure consistent quality and accuracy?

 

See the answer under "2. Transparency and Explainability", Auditability, points 3 and 4.

 

5. User Interaction and Feedback

User Feedback Mechanism: How will we collect and incorporate user feedback to improve the AI summaries?

There will be two ways for users to provide feedback: a thumbs up/down button (or equivalent) and an input field for qualitative input on summary quality. This feedback will be evaluated by the AI team and used to improve the product.
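
As an illustration, feedback could be captured in a structure like the sketch below; the field names are assumptions, not the actual data model.

```python
# Minimal sketch of a feedback record (illustrative only; field names are assumptions).
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class SummaryFeedback:
    summary_id: str
    thumbs_up: bool                # binary signal: good or bad summary
    comment: Optional[str] = None  # qualitative input on summary quality
    submitted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Usage: a user flags a summary as incomplete.
feedback = SummaryFeedback("tender-123", thumbs_up=False, comment="Budget section is missing.")
```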

 

Error Reporting: What processes will be in place for users to report errors or inaccuracies in the summaries?

See the previous answer. In addition, users can use the same feedback/support system Mercell currently has in place.