Best practices for high CPU usage by NSLinguisticTagger on macOS 13.6 when processing large text?
After trying multiple solutions online, I still can't figure this out. I'm seeing unusually high CPU usage when using `NSLinguisticTagger` to process large bodies of text on macOS 13.6. Specifically, I'm extracting named entities from a document of about 10,000 words, and the tagging pass takes a significant amount of time and resources. My implementation is as follows:

```swift
let text = "[Your long text here]"
let tagger = NSLinguisticTagger(tagSchemes: [.nameType], options: 0)
tagger.string = text
let range = NSRange(location: 0, length: text.utf16.count)

// Note: the unit-based overload's closure takes three parameters (tag, tokenRange, stop).
tagger.enumerateTags(in: range, unit: .word, scheme: .nameType,
                     options: [.omitWhitespace, .omitPunctuation]) { tag, tokenRange, _ in
    if let tag = tag {
        let word = (text as NSString).substring(with: tokenRange)
        print("\(word): \(tag.rawValue)")
    }
}
```

While this code works, CPU usage spikes to 90–100% during execution. I tried reducing the text size by processing it in chunks, but CPU usage per chunk is still high. I also experimented with different `NSLinguisticTagger` options, such as `.omitWhitespace` and `.omitPunctuation`, to see whether they would help performance, but the problem persists.

I also checked the debug logs and noticed messages about `NSLinguisticTagger` using multiple threads, which I suspect might be contributing to the load.

Is there a more efficient way to handle large text processing with `NSLinguisticTagger` on macOS? Any suggestions on best practices or alternative approaches would be greatly appreciated. Has anyone else encountered this?
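For completeness, here is a rough sketch of the chunked variant I mentioned. The splitting on blank lines and the per-chunk `autoreleasepool` are just how I framed the attempt, not a known fix:

```swift
import Foundation

let fullText = "[Your long text here]"
// Arbitrary chunking strategy for illustration: split on paragraph breaks.
let chunks = fullText.components(separatedBy: "\n\n")

let tagger = NSLinguisticTagger(tagSchemes: [.nameType], options: 0)

for chunk in chunks {
    // Drain temporary Foundation objects after each chunk.
    autoreleasepool {
        tagger.string = chunk
        let range = NSRange(location: 0, length: chunk.utf16.count)
        tagger.enumerateTags(in: range, unit: .word, scheme: .nameType,
                             options: [.omitWhitespace, .omitPunctuation]) { tag, tokenRange, _ in
            if let tag = tag {
                let word = (chunk as NSString).substring(with: tokenRange)
                print("\(word): \(tag.rawValue)")
            }
        }
    }
}
```

Each chunk still pegs the CPU while it is being tagged; chunking only bounds how long any single pass runs.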