During this talk, I will first briefly present some examples of our research activities in NLP at Lattice, revolving mainly around large language models and digital humanities. I will then turn to an ongoing project on the analysis of large literary corpora from the 19th and 20th centuries. Models like BERT, Llama, or Mistral are impressive, and NLP may sometimes look like a solved problem. However, a closer look reveals that these models remain far from perfect on traditional tasks such as coreference resolution in long texts, or quotation attribution. This is particularly true when working on challenging texts like novels, and even more so when the target language is not English. I will detail our first results and some perspectives for the months to come. The conclusion will be that NLP is far from dead, and that literature offers a more challenging playground than most benchmarks used for leaderboards.