Apple’s Mail application is drawing scrutiny after its relatively new automatic email sorting function incorrectly identified phishing messages as “Important,” potentially misleading users by adding perceived legitimacy to malicious emails.
Phishing involves deceptive emails designed to trick recipients into revealing sensitive information or clicking harmful links. The problematic behavior affects the mail categorization feature, initially introduced for English with iOS 18.2 and macOS 15.2 before a wider rollout with iOS 18.4 and macOS 15.4.
The AI-powered sorting operates automatically once enabled in Mail settings, distinct from the broader Apple Intelligence suite, which requires separate user activation on compatible devices. The misclassification pushes potentially harmful emails into a prioritized view, increasing the likelihood that users interact with them under the assumption that they are genuinely significant communications.
The issue was brought to light by Swiss tech journalist Rafael Zeier, who demonstrated how a clear phishing email—falsely promising a financial “refund”—was prominently placed in his “Important” inbox within Apple Mail on his iPhone. Exacerbating the problem, the Mail app itself overlaid a label describing the scam attempt as a “Time-critical transaction,” accompanied by a shopping icon, adding artificial urgency.
The email itself displayed common phishing red flags: it lacked specific sender company details, employed pressure tactics, and ultimately linked to a fraudulent website masquerading as an insurance provider to harvest user data. Identifying such malicious links requires a long-press gesture on the iPhone to preview the URL, unlike the simpler mouse-hover preview available in macOS Mail.
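To illustrate how crude the red flags in such a message can be, here is a minimal, purely hypothetical sketch of a rule-based check written in Swift. It is not Apple’s implementation or any mail provider’s filter; the phrase list, domain checks, and function names are assumptions chosen only to mirror the warning signs described above.

```swift
import Foundation

// Purely illustrative, rule-based look at common phishing red flags.
// The names, phrases, and thresholds here are hypothetical and are NOT
// how Apple Mail (or any provider) actually classifies messages.
struct PhishingHeuristics {
    // Pressure-tactic phrases often seen in scam emails (illustrative list).
    static let urgencyPhrases = ["act now", "time-critical", "refund expires", "immediately"]

    // Returns the red flags found in a message's sender, text, and linked hosts.
    static func redFlags(sender: String, subject: String, body: String,
                         linkedHosts: [String]) -> [String] {
        var flags: [String] = []
        let text = (subject + " " + body).lowercased()

        // 1. Pressure tactics / artificial urgency in the wording.
        if urgencyPhrases.contains(where: { text.contains($0) }) {
            flags.append("Urgency or pressure wording")
        }

        // 2. Sender uses a generic mailbox provider rather than a company domain.
        let senderDomain = String(sender.split(separator: "@").last ?? "")
        if ["gmail.com", "outlook.com", "yahoo.com"].contains(senderDomain) {
            flags.append("Generic sender domain: \(senderDomain)")
        }

        // 3. Links whose host does not match the sender's domain.
        for host in linkedHosts where !host.hasSuffix(senderDomain) {
            flags.append("Link points to an unrelated host: \(host)")
        }
        return flags
    }
}

// Example: a fake "refund" message that links to an unrelated site.
let flags = PhishingHeuristics.redFlags(
    sender: "claims@gmail.com",
    subject: "Time-critical transaction: your refund expires today",
    body: "Act now to receive your payment.",
    linkedHosts: ["secure-insurance-login.example.net"])
print(flags)  // All three red flags fire for this message.
```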
AI Glitches Part of a Wider Pattern
This Mail categorization problem isn’t an isolated incident within Apple’s AI ecosystem. The company has faced several public challenges with its AI features recently. In January 2025, Apple Intelligence drew criticism for generating inaccurate notification summaries, including false sports results announced prematurely and misattributed statements regarding public figures. Major news organizations, notably the BBC, whose reporting was misrepresented, expressed strong concerns.
A BBC spokesperson stated at the time, “It is essential that Apple fixes this problem urgently—as this has happened multiple times.” Echoing these worries, Reporters Without Borders warned that “The automated production of false information attributed to a media outlet is a blow to the outlet’s credibility.”
Following this specific controversy, Apple paused the AI news summary feature for news and entertainment apps in the subsequent iOS 18.3 beta release, announcing plans to improve accuracy and “further clarify when the text being displayed is summarization provided by Apple Intelligence.”
Development Hurdles and User Trust
Apple’s AI challenges haven’t been limited to text summarization. Users reported in December 2024 that Siri’s proactive features were creating phantom restaurant reservations in their calendars via OpenTable, apparently triggered simply by browsing related web pages in Safari.
These functional issues have occurred alongside broader developmental delays. The anticipated major AI overhaul for Siri has been repeatedly postponed, with Apple recently confirming that the release has slipped to later this year and some earlier reports suggesting a full rollout might not be complete until 2026.
An Apple spokesperson acknowledged the setback, stating, “It’s going to take us longer than we thought to deliver on these features, and we anticipate rolling them out in the coming year.” This slower pace comes as competitors like Amazon and Google introduce more advanced AI assistant capabilities. Adding another layer to trust concerns, Apple agreed to a $95 million settlement in January 2025 related to allegations that Siri had recorded user conversations without explicit consent.
Improving AI While Balancing Privacy
The recurring problems underscore the complexities Apple faces in developing and deploying reliable AI features, particularly given its stated focus on user privacy and on-device processing, which can present different challenges compared to cloud-centric AI models.
Apple is concurrently working to obtain better training data for its AI models, an effort that itself has drawn scrutiny from privacy advocates. While robust spam filters implemented by email service providers should ideally prevent most phishing emails from reaching the user’s inbox, the Mail app’s AI appears susceptible to misinterpreting those that slip through.
Users concerned about the AI categorization highlighting dangerous emails can turn the feature off entirely in Mail’s settings, or restore the traditional, chronologically sorted list view via the interface options.