
Online Safety Act Compliance (UK)

Updated: February 2026
Service Classification: User-to-User Service (Likely to be Accessed by Children)
Jurisdiction: United Kingdom

Liaura Limited confirms that it operates in accordance with the UK Online Safety Act 2023 and the Ofcom guidance applicable to regulated user-to-user services likely to be accessed by children.


Liaura is a safeguarding-first digital social learning platform designed for children aged 6–13 (core demographic 6–10). The platform enables moderated user-to-user interaction within a guardian-managed environment. Safety considerations are embedded at architectural, governance and operational levels.


This statement summarises our compliance framework and safety-by-design approach.



1. Service Classification


Liaura is a regulated user-to-user service under the Online Safety Act. The platform enables:


  • Direct messaging between approved users

  • Media uploads within moderated spaces


Liaura does not operate:


  • Public follower systems

  • Algorithmic recommender feeds

  • Open public discovery

  • Infinite scroll content loops

  • Behavioural advertising systems


These exclusions are deliberate safety design choices intended to limit the virality of harm and reduce systemic risk exposure.



2. Risk Assessments


In accordance with Sections 9 and 11 of the Online Safety Act 2023, Liaura maintains:


  • An Illegal Content Risk Assessment (ICRA)

  • A Children’s Risk Assessment (CRA)


These assessments follow Ofcom’s structured framework and are reviewed annually or upon significant service change. Records are retained for regulatory inspection.



Key Risk Areas Assessed


  • Grooming and child sexual exploitation

  • Hate speech and public order offences

  • Narcotics and weapons content

  • Cyberbullying

  • Harmful peer dynamics

  • Exposure to self-harm or suicide-related content

  • Risks arising from media uploads


Residual risk is mitigated through structural controls, moderation systems and governance oversight.



3. Safety-by-Design Architecture


Liaura operates a safety-by-design model incorporating:



Structural Controls


  • Guardian-linked child accounts (children cannot independently self-register)

  • No public discovery or open broadcast features

  • No popularity metrics or follower counts

  • Controlled connection approvals



Proactive Moderation Controls


  • Managed pattern-matching system to block prohibited language prior to delivery (a minimal sketch follows this list)

  • Community flagging tools on all interactions

  • Real-time notifications to Liaura safeguarding staff

  • Guardian notification pathways where appropriate

  • Quarterly “shadow testing” to validate control efficacy
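
A minimal sketch of the pre-delivery pattern-matching check referenced in this list, in Python; the pattern list, function name and escalation behaviour are illustrative assumptions, not Liaura's production implementation.

    import re

    # Hypothetical blocklist; a real deployment manages patterns centrally
    # and updates them as safeguarding staff identify new prohibited terms.
    PROHIBITED_PATTERNS = [
        re.compile(r"\bexample-banned-term\b", re.IGNORECASE),
    ]

    def screen_message(text: str) -> bool:
        """Return True if the message may be delivered, False if blocked."""
        for pattern in PROHIBITED_PATTERNS:
            if pattern.search(text):
                return False  # held back before delivery; staff are notified
        return True

Messages that fail the check are never shown to the recipient, which is what distinguishes pre-delivery blocking from after-the-fact takedown.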



Media Upload Controls


  • Structured upload review processes

  • Escalation thresholds for flagged media

  • Ongoing enhancement of detection capability



4. Age Assurance


Liaura is implementing a Highly Effective Age Assurance (HEAA) model for guardian onboarding.


This includes integrating Stripe Identity-based verification for adult account holders (a minimal sketch follows the list below), replacing low-effectiveness self-declaration methods with verified identity confirmation and ensuring:


  • The account holder is an accountable adult

  • Children cannot independently create accounts

  • Guardian oversight is structurally embedded
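
A minimal sketch of the Stripe Identity integration described above, assuming the official stripe Python library; the placeholder API key and metadata key are illustrative assumptions.

    import stripe

    stripe.api_key = "sk_live_..."  # placeholder; real keys are never hard-coded

    # Create a document-based verification session during guardian
    # onboarding; the metadata key is a hypothetical example.
    session = stripe.identity.VerificationSession.create(
        type="document",
        metadata={"guardian_account_id": "internal-account-id"},
    )

    # The prospective guardian is redirected to session.url; the account
    # is activated only after Stripe's identity.verification_session.verified
    # webhook confirms the document check succeeded.
    print(session.url)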



5. Reporting & Complaints Mechanisms


Liaura provides:


  • In-platform reporting tools accessible on all interactions

  • Structured moderation review

  • Guardian notification for safeguarding-relevant incidents

  • Documented moderation decisions

  • A defined appeal pathway


All reports are logged and reviewed proportionately.
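
As an illustration of what logging a report can involve, a hypothetical record shape is sketched below; the field names are assumptions, not Liaura's actual schema.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    # Hypothetical report record created whenever a user flags an interaction.
    @dataclass
    class ModerationReport:
        reporter_id: str
        content_id: str
        reason: str
        created_at: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc)
        )
        outcome: str = "pending"  # updated when moderation review completes
        appeal_open: bool = True  # the defined appeal pathway stays available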



6. Data Preservation & Law Enforcement Cooperation


In accordance with preservation obligations and safeguarding standards:


  • Flagged content and associated metadata are retained for a minimum of six months (a worked illustration follows this list)

  • Additional retention may apply where required for safeguarding or legal purposes
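
As a worked illustration of the six-month minimum, assuming each flagged record stores the UTC time it was flagged (the field and function names are hypothetical):

    from datetime import datetime, timedelta, timezone

    MINIMUM_RETENTION = timedelta(days=183)  # at least six months

    def earliest_deletion_date(flagged_at: datetime) -> datetime:
        """Flagged content and metadata must not be purged before this date."""
        return flagged_at + MINIMUM_RETENTION

    flagged = datetime(2026, 2, 1, tzinfo=timezone.utc)
    print(earliest_deletion_date(flagged))  # 2026-08-03 00:00:00+00:00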


Liaura cooperates with lawful requests from UK law enforcement in accordance with applicable data protection legislation.



7. AI Moderation Enhancement Roadmap


Liaura currently operates a structured text-based moderation engine.


To enhance detection capability and reduce residual risk, Liaura is evaluating a transition to a contextual AI moderation stack through a specialist safeguarding partner (e.g., RoseShield or equivalent).


This enhancement would:


  • Detect grooming patterns beyond keyword triggers

  • Introduce contextual behavioural risk modelling

  • Implement human-in-the-loop review processes (a hypothetical routing sketch follows this list)

  • Improve auditability and moderation transparency
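
A hypothetical routing sketch for the human-in-the-loop step named in this list; the class, thresholds and category label are illustrative assumptions and do not describe any vendor's actual API.

    from dataclasses import dataclass
    from typing import Literal

    @dataclass
    class AIVerdict:
        risk_score: float  # 0.0 (benign) to 1.0 (high risk)
        category: str      # e.g. "grooming-pattern"

    def route(verdict: AIVerdict) -> Literal["deliver", "human_review", "block"]:
        """Uncertain cases go to a person before any action is taken."""
        if verdict.risk_score < 0.2:
            return "deliver"
        if verdict.risk_score < 0.8:
            return "human_review"  # human-in-the-loop review queue
        return "block"             # blocked pending safeguarding review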


Prior to deployment, Liaura will conduct:


  • A Data Protection Impact Assessment (DPIA)

  • Vendor security and governance review

  • Proportionality assessment aligned to Ofcom guidance


No enhanced AI moderation deployment will occur without documented governance approval.



8. Governance & Accountability


Liaura Limited has appointed a Named Accountable Person responsible for Online Safety Act compliance.


The Board oversees:


  • Risk assessment reviews

  • Moderation effectiveness

  • Safeguarding incident monitoring

  • Systemic risk evaluation


Records are maintained in accordance with Ofcom documentation expectations.



9. Proportionality & Continuous Improvement


Liaura recognises that risk evolves over time. Our compliance approach is:


  • Proportionate to platform scale and functionality

  • Reviewed annually or upon material architectural change

  • Enhanced through specialist partnerships

  • Informed by regulatory guidance updates


Liaura does not represent the service as risk-free. Instead, it operates a structured mitigation and review framework designed to reduce harm and respond rapidly where risk materialises.



10. Company & Contact Information


LIAURA LIMITED

Company number: 16228737


Registered office address:

Capital House

272 Manchester Road

Droylsden

Manchester

England

M43 6PW


For regulatory, safeguarding or compliance enquiries:

hugh@liaura.app

bottom of page