The incident follows Meloni's 2024 lawsuit against two men who created deepfake pornography using her likeness and posted it to adult websites. It also reflects a documented epidemic: approximately 90 percent of non-consensual AI-generated sexual imagery depicts women. The Italian government has prioritized AI regulation following multiple scandals involving doctored images of prominent Italian women. Tech platforms including X have faced scrutiny—the platform's Grok tool generated an estimated 3 million sexualized images between December 2025 and January 2026. Italy has strengthened its AI laws to include prison terms for creators of harmful deepfakes.
For attorneys, the incident underscores the inadequacy of current platform safeguards and education-focused responses. Meloni's high-profile reposting highlights both the industrial scale of digital exploitation targeting women and the gap between existing legal frameworks and the speed of synthetic media creation. Experts argue that cryptographic hardware authentication and aggressive legal enforcement—not awareness campaigns alone—are necessary to address the threat. Practitioners should monitor whether Italy's regulatory approach becomes a model for other jurisdictions, and whether platforms face liability for providing the tools that generate such imagery at scale.