Acceptable Use Policy
Type: Acceptable Use Policy · Version: 1.0 · Published: 2026-04-17 11:24 UTC
════════════════════════════════════════════════════════════════
MAKRR — ACCEPTABLE USE POLICY
Version 1.0 · Effective 2026-04-17
Trashify Tech OÜ · Registry code 16495334
════════════════════════════════════════════════════════════════
AT A GLANCE
This Policy tells you what you may and may not do with the MAKRR
Service, its outputs, and MAKRR devices. Violating it is a
material breach of the Terms of Service. In serious cases we
will suspend or terminate your account, report to authorities
where required by law, and hold you responsible for the
consequences.
This Policy is incorporated into the Terms of Service. Capitalised
terms have the meanings given there.
────────────────────────────────────────────────────────────────
SECTION 1. PER-UPLOAD ATTESTATION
────────────────────────────────────────────────────────────────
Every time you upload an image, video, dataset, prompt,
annotation or other content to the Service, you confirm FOR THAT
UPLOAD that:
(a) you either own the content outright or hold all the
rights, licences, consents and permissions needed to
upload it, process it, annotate it, train on it, and
deploy any model derived from it;
(b) any identifiable person, brand, artwork, property, or
other protected subject matter depicted is included
lawfully, and you have a documented lawful basis under
the GDPR (or equivalent) for processing their personal
data;
(c) the content complies with this Policy and all applicable
law in each jurisdiction relevant to your use.
We record this attestation with the upload. Uploading content
without these rights is a material breach and triggers clause 13
of the Terms of Service (indemnity) in full. You bear all
consequences of an upload that breaches this attestation, to
the extent permitted by mandatory law.
────────────────────────────────────────────────────────────────
SECTION 2. PROHIBITED CONTENT
────────────────────────────────────────────────────────────────
You must not upload, generate, store, deploy or distribute
through the Service any content that falls into any of the
following categories.
2.1 Illegal or harmful to persons.
(a) child sexual abuse material ("CSAM") or any content
that sexualises minors. We report CSAM to competent
authorities (Estonian Police and Border Guard, INHOPE
hotlines, and, where applicable, the US National Center
for Missing & Exploited Children) without notice to
the uploading user;
(b) non-consensual intimate imagery ("revenge porn") or
deepfake intimate content of any real person;
(c) content that facilitates child grooming, trafficking,
or exploitation of persons;
(d) content within the scope of Regulation (EU) 2021/784 on
the dissemination of terrorist content online, or that
depicts or incites terrorism, genocide or mass violence;
(e) content that depicts or incites self-harm or suicide;
(f) content containing incitement to violence or hatred
based on a protected characteristic (race, ethnicity,
nationality, religion, disability, sex, sexual
orientation, gender identity), contrary to Framework
Decision 2008/913/JHA and national transposition;
(g) content that depicts or promotes illegal narcotics,
unlawful firearms trade, or services unlawful in the
relevant jurisdiction.
2.2 Infringing content.
(a) content that infringes any third party's copyright,
trade mark, design right, patent, database right, moral
right, right of publicity or privacy;
(b) content scraped, copied or extracted from a third-party
service in breach of that service's terms;
(c) content obtained in breach of a contractual
confidentiality or trade-secret obligation owed to a
third party.
2.3 Deceptive content.
(a) content crafted to impersonate a real person,
organisation or authority in order to deceive;
(b) content generated to mislead voters in an election,
contrary to the Digital Services Act and national
electoral laws;
(c) synthetic or manipulated content that fails to comply
with the transparency obligations of Article 50 of the
AI Act.
2.4 Malicious content.
(a) malware, viruses, rootkits, Trojans, spyware, adware,
or any code designed to harm, disable, or gain
unauthorised access to systems;
(b) phishing kits, credential harvesters, or social-
engineering templates;
(c) content that embeds executable payloads or exploits
vulnerabilities in our systems or in our customers'
systems.
2.5 Unlawful surveillance.
(a) content obtained by covert surveillance of workers,
customers, or members of the public where such
surveillance is unlawful in the relevant jurisdiction
(for example, in the EU, processing without a lawful
basis under Article 6 GDPR and required safeguards);
(b) content obtained by recording equipment placed in
private spaces (changing rooms, bathrooms, bedrooms,
medical facilities) without informed consent;
(c) content obtained in violation of workplace or labour
law on employee monitoring, including any agreement
with a works council or data-protection representative.
────────────────────────────────────────────────────────────────
SECTION 3. PROHIBITED USE CASES
────────────────────────────────────────────────────────────────
You must not use the Service, any model trained through the
Service, or any MAKRR hardware for any of the following purposes.
3.1 Safety-critical applications.
(a) medical diagnosis, triage or treatment of human
patients;
(b) operation as a component of autonomous vehicles,
aviation, aerospace, railway or maritime safety
systems;
(c) control of nuclear facilities or processes;
(d) weapons targeting, lethal autonomy, or any military
end-use, in particular in an embargoed destination;
(e) safety control of critical infrastructure (energy,
water, telecommunications);
(f) any "life-support" or similar life-or-safety
application.
Use in any of the above is permitted ONLY under a separate
written agreement with us that specifically authorises the use
and establishes the necessary safeguards and regulatory
approvals.
3.2 Biometric identification and emotion recognition.
(a) using the Service to identify, verify or re-identify
natural persons by their biometric features (face,
gait, voice) in publicly accessible spaces, except as
expressly permitted under Article 5(1)(h) of the AI
Act and subject to all conditions there set out;
(b) inferring emotions, stress, personality or similar
states of natural persons in the workplace or in
education settings, contrary to Article 5(1)(f) of the
AI Act;
(c) categorising natural persons on the basis of biometric
data to deduce or infer race, political opinions,
trade-union membership, religious or philosophical
beliefs, sex life, or sexual orientation, contrary to
Article 5(1)(g) of the AI Act.
3.3 Social scoring.
Evaluating or classifying natural persons based on social
behaviour or known, inferred or predicted personal
characteristics, for general-purpose scoring of trustworthiness
or similar, contrary to Article 5(1)(c) of the AI Act.
3.4 Decisions with legal or similarly significant effect.
Making automated decisions about a natural person concerning
employment, access to credit, insurance, housing, education,
essential public or private services, benefits, law enforcement,
migration, or the administration of justice, without MEANINGFUL
HUMAN REVIEW and the safeguards required by Article 22 GDPR and
the AI Act.
3.5 Surveillance of workers.
Continuous, individually identifying surveillance of workers in
the EU/EEA or UK where such surveillance is unlawful under
applicable labour law, collective agreements, works-council
agreements, or data-protection law.
3.6 High-risk AI Act use cases.
Placing on the market, putting into service, or using as a
"deployer" a system that would be a high-risk AI system under
Annex III of the AI Act, without first:
(a) notifying us in writing at compliance@makrr.ai, at
least 30 days before go-live where practicable;
(b) performing and documenting the deployer obligations
under Article 26 of the AI Act (including human
oversight, input-data quality, logging, data-subject
transparency, and use in accordance with the provider's
instructions);
(c) completing, where applicable, the fundamental-rights
impact assessment under Article 27; and
(d) ensuring any registration in the EU database required
under Article 49 is carried out.
3.7 Other unlawful or harmful uses.
(a) breaching any sanction or export control (see clause 8
of the Terms and clause 7 of the EULA);
(b) harassment, stalking, defamation, or other abusive
behaviour;
(c) market manipulation, securities fraud, insider trading;
(d) operation of unregistered financial services,
unlicensed gambling, or any regulated activity without
the necessary licence;
(e) circumvention of legally imposed content filtering or
age verification.
────────────────────────────────────────────────────────────────
SECTION 4. TECHNICAL AND SYSTEM-INTEGRITY RULES
────────────────────────────────────────────────────────────────
You must not:
(a) bypass, disable or interfere with any security feature,
authentication mechanism, rate limit, credit accounting,
access control, watermark, telemetry or tamper-
detection feature of the Service or the Hardware;
(b) attempt to access another Customer's account, data,
projects, models or devices;
(c) probe, scan or test the vulnerability of the Service
except under a written security-testing agreement with
us (contact security@makrr.ai);
(d) generate system-level load in excess of published
rate limits, perform denial-of-service tests, or
exhaust shared resources;
(e) scrape the Service for training data, model extraction,
or competitor benchmarking;
(f) use automated agents to create accounts or access the
Service other than through published APIs, within
published limits;
(g) reverse engineer the Service or the Firmware, or
attempt to derive source code or model weights from
on-device binaries, except to the extent mandatory law
permits (see clause 7.4 of the Terms and clause 4.3 of
the EULA);
(h) mirror, frame, or re-deliver the Service to third
parties;
(i) use the Service to develop or train a competing
product.
────────────────────────────────────────────────────────────────
SECTION 5. DATASET, IP AND THIRD-PARTY-CONTENT RULES
────────────────────────────────────────────────────────────────
Many training datasets carry licences — Creative Commons, ODbL,
custom licences, "non-commercial" or "share-alike" terms, or
explicit prohibitions on use for machine-learning training. You
are responsible for ensuring that:
(a) every dataset you upload is lawfully used in training;
(b) any "share-alike" obligation propagates to Customer
Models you export and deploy;
(c) any attribution obligation is honoured in deployment;
(d) where a dataset or image was extracted under the EU
text-and-data-mining exception of Article 4 of
Directive (EU) 2019/790, you have respected any opt-out
expressed by the rightholder in a machine-readable form
(robots.txt, ai.txt or equivalent).
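As an illustration only, rightholders often express a
machine-readable opt-out as robots.txt rules addressed to known
AI-training crawlers. The user-agent tokens below are examples of
real crawlers; no single opt-out format is standardised, and you
must check how each source actually expresses its reservation:

```
# Illustrative robots.txt entries only. The user-agent tokens
# are examples of AI-training crawlers; this is not an
# exhaustive or standardised list.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```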
The per-project data_license metadata field is a governance aid.
Selecting a value does not grant you a licence you do not hold
and does not cure non-compliant content.
────────────────────────────────────────────────────────────────
SECTION 6. PRIVACY AND GDPR RULES FOR UPLOADED CONTENT
────────────────────────────────────────────────────────────────
Where content you upload, train on or deploy depicts identifiable
persons, you must:
(a) have a lawful basis under Article 6 GDPR (and Article 9
for special-category data, including biometric data
used for identification);
(b) provide the notice required under Articles 13–14 to
data subjects, where required;
(c) keep records of processing under Article 30 and appoint
a representative or DPO where required;
(d) perform a DPIA under Article 35 for large-scale or
high-risk processing (biometric or systematic monitoring
in publicly accessible spaces almost always requires
a DPIA);
(e) respect data-subject rights (access, rectification,
erasure, objection, restriction, portability),
including the ability to locate and remove images of a
specific individual on request;
(f) not upload special-category data (health, religion,
political opinion, biometrics used for identification,
trade-union membership, sexual orientation, sex life)
unless an Article 9(2) ground applies and you have
notified us in writing;
(g) comply with national CCTV, workplace-surveillance,
and public-space filming rules.
────────────────────────────────────────────────────────────────
SECTION 7. REPORTING AND ENFORCEMENT
────────────────────────────────────────────────────────────────
7.1 Reporting illegal content. Report illegal content or
breaches of this Policy to support@makrr.ai with the subject
line "ABUSE REPORT". See clause 16 of the Terms for the notice-
and-action procedure under Article 16 of the Digital Services
Act.
7.2 Reporting security vulnerabilities. Report security
vulnerabilities to security@makrr.ai. Please give us a reasonable
window to remediate before any public disclosure. We do not
currently run a public bug-bounty programme; we may offer rewards
at our discretion.
7.3 Enforcement actions we may take. Where we reasonably
believe this Policy is breached, we may:
(a) remove, disable, restrict or downrank content;
(b) suspend an account, team, device, model deployment or
training job;
(c) refuse new uploads, orders or training runs;
(d) revoke device credentials;
(e) require you to cooperate with our investigation;
(f) disclose to law enforcement where legally required;
(g) terminate the Agreement.
We take the minimum action appropriate to the breach, but we
will not hesitate to act quickly where there is risk to people,
to other users, or to the Service. No refund is given for a
period of suspension attributable to your breach.
7.4 Appeals. A user affected by an enforcement action may
appeal in writing to legal@makrr.ai within six (6) months,
stating the reasons. Where required by the DSA we will provide a
Statement of Reasons and you may use an out-of-court dispute-
resolution body certified under Article 21 DSA.
7.5 Liability for breach is yours. Where you upload or use
content in breach of this Policy, liability rests with you.
Enforcement by us (including removal, reporting to authorities
or cooperation with regulators) does not reduce your liability
to third parties or waive our right to recover losses, fines or
legal costs we incur as a result of your breach, within the
limits of clause 14 of the Terms.
────────────────────────────────────────────────────────────────
SECTION 8. UPDATES
────────────────────────────────────────────────────────────────
We may update this Policy to reflect new risks, new laws, and
new use cases. Changes take effect on publication at
/legal/acceptable_use and, for changes that materially restrict
permitted use, no fewer than 14 days after notice to you.
Continued use after the effective date is acceptance.
────────────────────────────────────────────────────────────────
SECTION 9. CONTACT
────────────────────────────────────────────────────────────────
Abuse and takedowns: support@makrr.ai (subject: ABUSE REPORT)
Security: security@makrr.ai
AI Act / compliance: compliance@makrr.ai
Legal: legal@makrr.ai
Trashify Tech OÜ
Registry code: 16495334
Registered office: Gonsiori tn 29-3, Kesklinna linnaosa,
10147 Tallinn, Harju maakond, Estonia
════════════════════════════════════════════════════════════════
Version 1.0 · Effective 2026-04-17
════════════════════════════════════════════════════════════════