cechat Pilot Program
FAQs

Welcome to the cechat AI Pilot FAQ. This document answers common questions about the pilot: how cechat is set up, how security and privacy are protected, how its core features operate, and what the contract conditions and pricing look like. Whether you are new to AI or an experienced user, this FAQ covers what you need to know to get started smoothly and securely.

  • Other than the statement in the terms and conditions, how are you ensuring that the product is appropriate for +13/Year 7 use?

    The pilot project has been built to reflect the "Framework for the use of Generative AI in Catholic Schools" (the Framework), developed by CEnet in conjunction with the Members and Catholic scholars, and based on both the Rome Call for AI Ethics and the "Australian Framework for Generative Artificial Intelligence in Schools" (Commonwealth of Australia, 2023). The Framework has been embedded as one of the overriding system messages that accompanies all user prompts and replies. In addition to building in these controls at an agent level, each LLM available for use has been selected for its ability to detect and correct for bias, inaccuracies and other common AI issues. It should be noted that even with these protections in place it may be possible to "jailbreak" the AI into providing responses that are inaccurate and possibly offensive. To make cechat more robust, schools have been selected to pilot the system and report back improvements, inconsistencies and other system limitations. This feedback will be passed to the agent developer and the LLM provider where required.

    Is the product browser based or accessed through an app?

    Both Mecha and Cogniti are browser-based services. Both offer plugins to other products such as Canvas and Google Chrome.

    Is location tracked?

    No, location is not tracked.

    How do users of the product authenticate to the solution, what is the minimum information required?

    Users authenticate using Okta or, where Okta is not available, SAML 2.0. This allows connections from common SAML-compliant authentication providers, including Microsoft and Google.

    What considerations have been assessed in relation to student protection?

    Student protection is at the centre of the cechat architecture. This includes the use of strong authentication, having the entire solution (Cogniti and Mecha) installed in CEnet’s Microsoft Azure environment, and the implementation of the AI rules guiding each of the bots. In addition, CEnet has engaged the Microsoft partner Insight to provide critical feedback on our Azure security configuration. Insight has been involved in many Cogniti deployments.

    Is the data stored outside of the CEnet tenant, and if so by whom?

    All data remains within the CEnet Microsoft Azure tenancy.

    Is there a mechanism to restrict access to the product to a specific user group?

    Yes. Individuals, schools, and entire Dioceses can be onboarded and managed via the authentication mechanisms in place. Individual use is logged within the tool and in the CEnet Azure tenancy.

    Are the data centres ISO27001 compliant?

    Yes, all Microsoft data centres in Australia are ISO 27001 compliant. ISO 27001 is a globally recognised standard for information security management systems. It sets out the policies and procedures needed to protect organisations and includes the risk controls necessary for ensuring the security of information. Microsoft data centres are designed, built and managed using a layered security model and the principle of defence in depth, and are covered by a comprehensive range of compliance offerings, including ISO 27001.

    However, it is always good practice to check the most current compliance status directly on Microsoft's Trust Center website, which provides detailed and up-to-date information about their compliance offerings.

    How do you ensure data privacy, anonymity and security?

    cechat does not solicit, collect or store personal information; however, if personal information is inadvertently disclosed, these chats are stored in the secure on-tenancy log and can be removed as required. No chat information is used to train the Large Language Model.

    Here's how CEnet has approached this issue:

    1. Anonymisation: Personal data entered into the system is not associated with an individual user's identity. The AI does not have the ability to access or retrieve personal data unless explicitly provided during the conversation. 

    2. Data Handling: Conversations are not stored or used to improve the model after the session ends. 

    3. Secure Infrastructure: Robust security measures are in place to protect data from unauthorised access or loss. This includes encryption of data at rest and in transit, as well as strong access controls.

    4. Compliance with laws: Tools and practices are aligned with privacy and data protection laws, such as the European General Data Protection Regulation (GDPR), the Australian Privacy Principles (see Section 2 of this FAQ) and other applicable legislation.

    5. User Awareness: All users are trained to avoid sharing sensitive personal information and to use the AI responsibly.

    These practices align with the principles noted in the Framework for the use of Generative AI in Catholic Schools, particularly those related to privacy, security, and safety (6.1 to 6.6), and are designed to uphold the dignity and rights of individuals (2.3), maintaining respect for human dignity (1.1) and the common good (1.2).
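    The anonymisation point above (personal data not associated with an individual user's identity) is commonly implemented by pseudonymising identifiers before log records are written. The sketch below is illustrative only and assumes a hypothetical salted-hash scheme; it is not a description of cechat internals.

```python
# Illustrative sketch: pseudonymising the user identifier before a chat is
# logged, so logs stay useful for analysis (and targeted removal requests)
# without recording who the user is. LOG_SALT is a hypothetical secret.
import hashlib
import hmac

LOG_SALT = b"rotate-me-per-deployment"  # placeholder, not a real key

def pseudonymise(user_id: str) -> str:
    """Derive a stable pseudonym: the same user always maps to the same
    token, but the token cannot be reversed to the identity."""
    return hmac.new(LOG_SALT, user_id.encode(), hashlib.sha256).hexdigest()[:16]

def log_record(user_id: str, prompt: str, reply: str) -> dict:
    """Build the record that would be written to the on-tenancy log."""
    return {"user": pseudonymise(user_id), "prompt": prompt, "reply": reply}

rec = log_record("jane.citizen@example.edu.au", "What is photosynthesis?", "...")
```

    A keyed hash (HMAC) rather than a plain hash is used here so that someone without the salt cannot confirm a guessed identity by re-hashing it.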

  • Is your product compliant with Australian Privacy Principles?

    In short, yes. A summary of the Australian Privacy Principles (including links to the relevant APP) and the cechat response to each is provided below:

    APP 1

    Title: Open and transparent management of personal information.

    Purpose: Ensures that APP entities manage personal information in an open and transparent way. This includes having a clearly expressed and up to date APP privacy policy.

    Cechat Response: cechat does not collect or store personal information. 

    APP 2

    Title: Anonymity and pseudonymity.

    Purpose: Requires APP entities to give individuals the option of not identifying themselves, or of using a pseudonym. Limited exceptions apply.

    Cechat Response: cechat uses existing methods of authentication and logging. These methods comply with APP 2. Current policies do not normally permit pseudonyms; refer to your Diocesan or School policy for details on how APP 2 is handled.

    APP 3

    Title: Collection of solicited personal information.

    Purpose: Outlines when an APP entity can collect personal information that is solicited. It applies higher standards to the collection of sensitive information.

    Cechat Response: cechat does not solicit, collect or store personal information.

    APP 4

    Title: Dealing with unsolicited personal information.

    Purpose: Outlines how APP entities must deal with unsolicited personal information.

    Cechat Response: cechat does not solicit, collect or store personal information; however, if personal information is inadvertently disclosed, these chats are stored in the secure on-tenancy log and can be removed as required. No chat information is used to train the Large Language Model.

    APP 5

    Title: Notification of the collection of personal information.

    Purpose: Outlines when and in what circumstances an APP entity that collects personal information must tell an individual about certain matters.

    Cechat Response: CEnet and its members comply with this principle. If personal information is found to have been disclosed, existing procedures are used to inform users as required by APP 5.

    APP 6

    Title: Use or disclosure of personal information.

    Purpose: Outlines the circumstances in which an APP entity may use or disclose personal information that it holds.

    Cechat Response: CEnet and its members comply with this principle. If personal information is found to have been disclosed, existing procedures are used to inform users as required by APP 6.

    APP 7

    Title: Direct marketing.

    Purpose: An organisation may only use or disclose personal information for direct marketing purposes if certain conditions are met.

    Cechat Response: CEnet and its members comply with this principle. No personal information is solicited by cechat, and any personal information inadvertently collected cannot be used for any kind of marketing purpose.

    APP 8

    Title: Cross-border disclosure of personal information.

    Purpose: Outlines the steps an APP entity must take to protect personal information before it is disclosed overseas.

    Cechat Response: No personal information is solicited by cechat, and any personal information inadvertently collected is stored in Australian data centres and is not available overseas.

    APP 9

    Title: Adoption, use or disclosure of government related identifiers.

    Purpose: Outlines the limited circumstances when an organisation may adopt a government related identifier of an individual as its own identifier, or use or disclose a government related identifier of an individual.

    Cechat Response: No personal information or government related identifier is solicited by cechat.

    APP 10

    Title: Quality of personal information.

    Purpose: An APP entity must take reasonable steps to ensure the personal information it collects is accurate, up to date and complete. An entity must also take reasonable steps to ensure the personal information it uses or discloses is accurate, up to date, complete and relevant, having regard to the purpose of the use or disclosure.

    Cechat Response: cechat does not solicit, collect or store personal information; however, if personal information is inadvertently disclosed, these chats are stored in the secure on-tenancy log and can be removed as required. No chat information is used to train the Large Language Model.

    APP 11

    Title: Security of personal information.

    Purpose: An APP entity must take reasonable steps to protect personal information it holds from misuse, interference and loss, and from unauthorised access, modification or disclosure. An entity has obligations to destroy or de-identify personal information in certain circumstances.

    Cechat Response: cechat does not solicit, collect or store personal information; however, if personal information is inadvertently disclosed, these chats are stored in the secure on-tenancy log and can be removed as required. No chat information is used to train the Large Language Model.

    APP 12

    Title: Access to personal information.

    Purpose: Outlines an APP entity’s obligations when an individual requests to be given access to personal information held about them by the entity. This includes a requirement to provide access unless a specific exception applies.

    Cechat Response: cechat does not solicit, collect or store personal information; however, if personal information is inadvertently disclosed, these chats are stored in the secure on-tenancy log and can be provided or removed as required. No chat information is used to train the Large Language Model.

    APP 13

    Title: Correction of personal information.

    Purpose: Outlines an APP entity’s obligations in relation to correcting the personal information it holds about individuals.

    Cechat Response: cechat does not solicit, collect or store personal information; however, if personal information is inadvertently disclosed, these chats are stored in the secure on-tenancy log and can be corrected or removed as required. No chat information is used to train the Large Language Model.

    Is all data, and any backups of data stored within Australia?

    All data and backups remain within the CEnet Microsoft Azure tenancy. 

    Is the use of Cogniti governed by an overarching CEnet/Diocesan/School data agreement?

    All Dioceses who have signed the CEnet MSA are automatically covered by that MSA. For Dioceses/Schools who have not signed the MSA, a separate data agreement can be organised.

    Are third-party organisations involved in the development of the product, how are they governed, and what data will they have access to?

    Currently, all third-party organisations involved in the creation of cechat are governed by their own internal structures. However, their engagement contracts require them to meet or exceed the CEnet data privacy and ICT security requirements as specified. These contractors do not have direct access to the system databases, but they will be able to examine logs for the purposes of fault-finding and development.

    How is data handled and stored?

    There are a number of data entities involved in the cechat pilot project. It should be made clear that cechat does not solicit, collect or store personal information. All user identity and authorisation occurs in existing, dedicated authorisation and access systems. No passwords or other information is stored in cechat.  All chats with the agents are recorded for analysis.

    Users will be reminded in training and in operation of the model that they should not include personal information in their chats; however, if personal information is inadvertently disclosed, these chats are stored in the secure on-tenancy log and can be removed as required. At no time is chat information used to train the Large Language Model. In operation, the generative AI generates responses to user inputs in real time and does not store any data after the conversation ends. The data cechat processes for a conversation is temporarily held in memory to maintain the context of the conversation, but it is not permanently retained or stored.

    It's important to note that both Microsoft and OpenAI implement robust data security measures to ensure the privacy and security of the information during the interaction. These practices align with the Framework for the use of Generative AI in Catholic Schools related to Privacy, Security, and Safety, specifically 6.1 to 6.6.
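    The distinction above, context held in memory only for the life of the conversation, separate from the durable audit log, can be sketched as follows. This is an illustrative sketch only; the ChatSession class is a hypothetical stand-in, not cechat code.

```python
# Illustrative sketch: conversation context lives only in memory for the
# duration of the session. Only the audit log (written elsewhere) persists;
# nothing here is retained after the session or used for model training.

class ChatSession:
    def __init__(self) -> None:
        # Exists only for this object's lifetime; never written to disk.
        self._context: list = []

    def add_turn(self, user_prompt: str, model_reply: str) -> None:
        """Record one exchange so later prompts can reference earlier ones."""
        self._context.append({"role": "user", "content": user_prompt})
        self._context.append({"role": "assistant", "content": model_reply})

    def context(self) -> list:
        """Sent with each request so the model can 'remember' the chat."""
        return list(self._context)

    def end(self) -> None:
        """Ending the session discards the in-memory context."""
        self._context.clear()

s = ChatSession()
s.add_turn("Name a renewable energy source.", "Solar power.")
turns = s.context()  # two entries while the session is live
s.end()              # context is now empty
```

    The design point is that "memory" for context and "storage" for audit are separate mechanisms with separate lifetimes, which is why ending the conversation removes the context without affecting the on-tenancy log.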

    What are your retention and destruction policies associated with trial data?

    Data from the pilot project will be stored as per existing CEnet policies, or for 18 months after the pilot is complete. If CEnet decides to discontinue the AI service, all data will be destroyed 18 months after the completion of the pilot. If CEnet decides to continue the service after the pilot, all pilot data will be archived and new logs created only for those Members and customers who sign up for the ongoing service.

  • What are the default restrictions in place when creating an AI Agent?

    Users can only create agents once they have completed the “CEnet Agent Creation Process Course” or equivalent. Currently, students are not able to create agents.

    What additional restrictions can be applied to an AI Agent at the time of creation and retrospectively?

    Agents can be created and shared by trained teachers and administrators. Teachers can delete and edit their own agents; admins can debug, modify and, if necessary, delete any and all agents as required.

    Does the AI Agent have the ability to identify sensitive or identifiable information and not store it?

    That depends on how the agent is configured. It is possible to create an agent to do this; however, the accuracy of this detection may vary. The LLM holds chat information only in volatile memory during the chat session and does not store information after the session has ended.
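    A minimal version of such a check, and a hint at why its accuracy "may vary", can be sketched as below. This is illustrative only: pattern matching catches structured identifiers such as emails and phone numbers, but free-text identifiers like names and addresses need far more sophisticated detection.

```python
# Illustrative sketch: a simple pattern-based check an agent could run on a
# prompt to flag likely personal information before it is processed.
# The patterns here are deliberately minimal examples, not production rules.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    # Australian mobile numbers such as "0412 345 678" or "+61 412 345 678".
    "au_mobile": re.compile(r"(?:\+61\s?|0)4\d{2}\s?\d{3}\s?\d{3}"),
}

def flag_pii(text: str) -> list:
    """Return the categories of likely personal information found in text."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(text)]

flags = flag_pii("My email is sam@example.com, call me on 0412 345 678")
clean = flag_pii("What is photosynthesis?")
```

    An agent configured this way could refuse the prompt, redact the match, or route the chat for log removal; none of those policies is implied by the pilot description above, they are simply the usual options.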

    Is training available for various constituent groups (teaching vs. technical)?

    CEnet is creating a suite of online and face-to-face training sessions to help users and technical staff use cechat most effectively. Refer to the CEnet AI portal at cenet.ai. Material in this portal will change as it is updated and expanded. Use cenet.ai as your starting point on your AI in Catholic education journey.

  • What LLMs are referenced by the product?

    Currently Cogniti is limited to the following LLMs:

    • GPT-3.5, hosted by Microsoft Azure. This model is used for chat completions.

    • GPT-4, hosted by Microsoft Azure. This model is used for chat completions.

    • GPT-4 Turbo, hosted by Microsoft Azure. This model is used for chat completions.

    • GPT-3.5 16K, hosted by Microsoft Azure. This model is used for chat completions.

    • Text embedding ada-002, hosted by Microsoft Azure. This model is used to generate text embeddings.

    System admins can select the following models for certain aspects of system operation. They would not normally be enabled for a Member organisation:

    • GPT-3.5 from OpenAI. This model is used for chat completions.

    • GPT-4 from OpenAI. This model is used for chat completions.

    Mecha uses multiple LLMs. It will be possible at some point during the pilot for users to select the LLM of their choice for their agent.


    How do you approach the issue of bias, and mitigate bias in the response process?

    All current AI models have to deal with inherent bias, hallucination and other AI-related limitations. It is not possible to fully isolate users from these issues; however, CEnet approaches this problem in two main ways. Firstly, the AI principles (from the Framework for the use of Generative AI in Catholic Schools) are embedded in all user agents at a system level. This helps steer the underlying models away from these issues. Secondly, CEnet uses models that have active development and bias detection built in by OpenAI.

    Importantly, users should verify any critical information provided by the AI using other reliable resources, especially when it is used for educational purposes. It's a good opportunity to promote critical thinking, where students assess the reliability and validity of the information they receive.

    This discussion also aligns with the principles in the Framework for the use of Generative AI in Catholic Schools, particularly 1.2 Instruction and 1.4 Critical Thinking. It emphasises the importance of understanding the capabilities and limitations of AI tools, and the need to enhance critical thinking rather than restrict human thought.


    How do you ensure that any content or reference material is aligned with the most current Australian curriculum, and any additional state specific content?

    To ensure alignment with the Australian Curriculum in the responses from cechat, the following approaches are used:

    • Content Understanding: The LLM has been trained on a diverse range of data, including publicly available text related to education. While no ChatGPT model has been specifically trained on the Australian Curriculum, the training data used included this material, so the model can generate responses based on the Australian Curriculum and publicly available state-based educational data. cechat can further be “steered” using locally produced “resources” to provide grounding in local issues and conditions.

    • User Input: The user plays a crucial role in guiding the conversation. Clear and specific questions can help elicit responses that align better with the Australian Curriculum. For example, asking about the 'stages of the water cycle' will likely yield a response that aligns with Year 4 Science in the Australian Curriculum.

    • Continual Learning: Individual LLMs do not have the ability to learn or adapt to specific curricula in real time, but improvements are continually being made to their training and fine-tuning processes, and these new and improved models are added to cechat as they become available.

    • Critical Evaluation: Teachers, students, and other users should critically evaluate the AI’s responses for alignment with the curriculum. If there's any doubt about the correctness or relevance of the information provided, it should be verified using a reliable source.

    Remember, the AI can only provide support in the learning process; the responsibility for aligning teaching content with the Australian Curriculum ultimately remains with teachers. They are the subject-matter experts and are best positioned to interpret and implement the curriculum. This underscores principle 1.3 Teacher Expertise in the guiding statements.

    Is it possible to curate the knowledge base for individual diocese or school specific needs?

    Yes, it is possible to add specific “resources” to an agent to help ground it in the knowledge of a particular diocese or school. These resources can be added on an agent-by-agent basis.
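    Grounding an agent with local resources generally means retrieving the most relevant resource for each question and prepending it to the prompt. The sketch below is illustrative only: real deployments rank resources with an embedding model (such as the text embedding ada-002 model listed earlier), whereas plain word overlap stands in here so the example is self-contained.

```python
# Illustrative sketch: grounding a prompt with locally curated "resources".
# Word-overlap scoring is a deliberately crude stand-in for embedding-based
# similarity; the ranking step is the part that matters.

def score(query: str, passage: str) -> int:
    """Count words the query and passage share (toy similarity measure)."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def ground_prompt(query: str, resources: list, top_k: int = 1) -> str:
    """Prepend the most relevant local resource(s) to the user's question."""
    ranked = sorted(resources, key=lambda p: score(query, p), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Use this school-provided context:\n{context}\n\nQuestion: {query}"

# Hypothetical school-specific resources attached to one agent.
resources = [
    "Our college uniform policy requires blazers in Terms 2 and 3.",
    "The school canteen is open Monday to Friday until 2pm.",
]
prompt = ground_prompt("When is the canteen open?", resources)
```

    Because the resources travel with the prompt rather than being trained into the model, each diocese or school can maintain its own set per agent without affecting anyone else's, which matches the agent-by-agent curation described above.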

    How scalable is the product as growth and demand increase?

    Currently the University of Sydney app.cogniti.ai instance hosts over 2,000 agents and 15,000 users, including USYD staff and students as well as other Cogniti users. app.cogniti.ai runs 20 servers during peak times and 5 during off-peak. CEnet will scale similarly as load grows, since maintaining a good user experience is important.

    Are the prompts input into the AI agent used to train the Large Language Model (LLM)?

    User prompts are never used to train the underlying model.

    What approved data sources does Cogniti use to generate content?

    Currently there is no approval body regulating the sources of data used to train LLMs. To address this limitation, the LLMs used by cechat are all private instances of commercial products provided to CEnet by Microsoft under their arrangement with OpenAI. These are the same LLMs used in Microsoft Copilot. All Microsoft-supplied LLMs have been trained on large datasets based on content derived from the Internet, and the models are further developed before being released to Microsoft customers. All other data used to “ground” and “steer” the agents is provided by the agent owners, including teachers, admins and other people trained to create agents. Typically, students will not be permitted to create agents at this time. Should an approval body be established, CEnet will move to comply with its recommendations.

  • Is there a draft agreement that we can assess?

    There are no additional agreements to enter into. This pilot is a part of the current CEnet services offering. All Dioceses are entitled to access the pilot without the requirement of any further agreements.

    What are the indicative costs after the trial period?

    These costs are unknown at this time. This is one of the key outcomes of the pilot project. CEnet will cover the costs of development and access during the pilot program and indicative costs will be made available as soon as that information is available to assist with Diocesan budgeting for 2026.

    If the product utilises Copilot, how will this coexist with our current Microsoft A5 licensing, will there be an additional charge?

    cechat does not use Microsoft Copilot, is not licensed through Microsoft, and has no impact on existing Diocesan Microsoft agreements. One of the goals of the pilot program is to develop an API integration with Microsoft Copilot to maximise the benefit to those Dioceses using Copilot. There will be no additional licensing charges for cechat after the pilot program; however, a user-pays charging mechanism will be put in place for those Members or Customers who wish to continue using cechat post pilot.