Enhancing Paragraph Generation with a Latent Language Diffusion Model


In the fast-evolving world of natural language processing (NLP), there is strong demand for generating coherent and controlled text, as discussed in the work Toward Controlled Generation of Text. Traditional autoregressive models such as GPT, long the industry standard, possess inherent limitations that sometimes manifest as repetitive and low-quality outputs, as seen in the work The Curious Case of Neural Text Degeneration. This is primarily due to a phenomenon known as “exposure bias,” described in the work Scheduled Sampling for Sequence Prediction with Recurrent Neural Networks: a mismatch between how these models are trained and how they are actually used during inference, which often leads to error accumulation during text generation.

To address these challenges, we want to call attention to PLANNER, a latent text diffusion model that we introduced in the fall of 2023. The model combines non-autoregressive latent semantic diffusion with autoregressive generation to overcome the hurdles faced by its predecessors. Specifically, we hope this research improves the experience of users who benefit from more diversified and controlled text generation. By adopting a latent diffusion approach (as discussed in High-Resolution Image Synthesis with Latent Diffusion Models and Latent Diffusion for Language Generation), PLANNER mitigates the computational expense typically associated with similar models while simultaneously delivering superior diversity and cohesiveness and reducing repetition in generated text, particularly in longer blocks of text and paragraphs, which have traditionally posed a challenge for text generation models.

Our model, PLANNER, extends its benefit to various text generation tasks such as semantic generation, text completion, and summarization, with extensive evaluations of fluency, diversity, and repetition mitigation.

Figure 1: A three-stage model for text generation. We begin with a variational paragraph embedder in stage 1 and evolve the coarse text through our latent diffusion model, PLANNER, for a finer coherent result in stage 3.

In stage 1 of Figure 1, a variational paragraph embedder encodes paragraphs into a series of latent codes. The encoder E and decoder D construct a bidirectional mapping between the discrete data space and the latent code space. The paragraph embeddings z are extracted by taking the first k hidden-state vectors of dimension h from the final layer of E. These embeddings are fed into the initial steps of the decoder D, which is trained to reconstruct the original text x. BOS and EOS represent “beginning of sentence” and “end of sentence” tokens, respectively.
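As a rough sketch, extracting the paragraph embedding z amounts to slicing the first k hidden states of the encoder's final layer. The dimensions below are illustrative assumptions, not the paper's actual settings, and the mock array stands in for a real transformer encoder:

```python
import numpy as np

# Assumed dimensions for illustration only.
SEQ_LEN, H, K = 128, 768, 16  # paragraph length, hidden size h, number of latent codes k

def extract_latent_codes(encoder_hidden, k=K):
    """Paragraph embedding z: the first k hidden-state vectors (each of
    dimension h) from the final layer of the encoder E."""
    return encoder_hidden[:k, :]

# Mock final-layer hidden states for one paragraph of SEQ_LEN tokens.
encoder_hidden = np.random.randn(SEQ_LEN, H)
z = extract_latent_codes(encoder_hidden)
print(z.shape)  # (16, 768)
```

The resulting k x h matrix z is what the decoder D consumes in its initial steps when learning to reconstruct the original text x.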

In stage 2 of Figure 1, these latent codes z are processed by a transformer-based latent diffusion model (as discussed in the work Scalable Diffusion Models with Transformers) for training, so that it can generate new latent codes during inference, simulating the evolution of text from coarse to fine. Finally, in stage 3 the decoder D translates these evolving latent codes into coherent text.
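The forward (noising) side of stage 2 can be sketched as follows. The linear beta schedule, step count, and shapes here are assumptions for illustration, not the paper's exact configuration:

```python
import numpy as np

def diffuse(z0, t, T=1000, beta_start=1e-4, beta_end=0.02, rng=None):
    """Add t steps of Gaussian noise to clean latent codes z0 using a linear
    beta schedule (an assumed schedule). During training, the diffusion model
    learns to predict the added noise; during inference, it runs this process
    in reverse, evolving random noise into latent codes from coarse to fine."""
    rng = rng or np.random.default_rng(0)
    alpha_bar = np.cumprod(1.0 - np.linspace(beta_start, beta_end, T))[t]
    eps = rng.standard_normal(z0.shape)
    zt = np.sqrt(alpha_bar) * z0 + np.sqrt(1.0 - alpha_bar) * eps
    return zt, eps

z0 = np.zeros((16, 768))       # k x h latent codes from stage 1 (zeros as a placeholder)
zt, eps = diffuse(z0, t=999)   # near the final step, zt is almost pure noise
```

The reverse process starts from such near-pure noise and repeatedly denoises, which is what the "coarse to fine" evolution in Figure 1 refers to.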

Our PLANNER latent diffusion model considers the conditioning signal as raw text, such as preceding context or the document to be summarized. We applied a conditional feature encoder τ to the input and used the hidden states at the last layer as y. We fed y and the time embedding t into the latent diffusion model through two channels, namely cross-attention and adaptive layer normalization. The aim of our research is to use existing text samples, such as an email or a summary of a document, to help generate longer texts that are both cohesive and readable. Examples in the following two figures are taken from a public dataset of text samples related to hotel reviews.
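A minimal single-head sketch of the two conditioning channels follows. The shapes, random placeholder weights, and residual wiring are simplifying assumptions; the real model uses full multi-head transformer blocks:

```python
import numpy as np

rng = np.random.default_rng(0)
K_CODES, H, COND_LEN = 16, 64, 32  # assumed: k latent codes, hidden size, conditioning length

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_attention(z, y, Wq, Wk, Wv):
    """Latent codes z attend to conditioning features y (the last-layer hidden
    states of the feature encoder tau applied to the raw conditioning text)."""
    q, k, v = z @ Wq, y @ Wk, y @ Wv
    return softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v

def adaptive_layer_norm(z, t_emb, W_scale, W_shift):
    """AdaLN: normalize z, then scale and shift it with parameters predicted
    from the time embedding t."""
    zn = (z - z.mean(-1, keepdims=True)) / (z.std(-1, keepdims=True) + 1e-6)
    return zn * (1.0 + t_emb @ W_scale) + t_emb @ W_shift

z = rng.standard_normal((K_CODES, H))            # noisy latent codes
y = rng.standard_normal((COND_LEN, H))           # conditioning features from tau
t_emb = rng.standard_normal(H)                   # time embedding
W = [rng.standard_normal((H, H)) * 0.02 for _ in range(5)]
z = adaptive_layer_norm(z, t_emb, W[0], W[1])
z = z + cross_attention(z, y, W[2], W[3], W[4])  # residual update
```

The point of the sketch is the routing: the time embedding t enters through the normalization parameters, while the text conditioning y enters through attention.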

Figure 2: A comparison of the fine-tuned GPT-2 large model (the most relevant model at the time of this research), in the column at the left, with PLANNER, at the right, when generating text from a repetitive prompt (shown as “Prefix” in the figure). On the left, the GPT-2 model, despite using top-p sampling, still yields text with self-reinforced repetition. On the right, data from 512 generation roll-outs illustrate that the new method produces a wider variety of first 1-grams, showcasing its ability to generate more diversified text unaffected by the poorly devised prompt.

Figure 2 compares two language models: a fine-tuned GPT-2 large model and our method. It showcases how each model handles a prompt designed to evaluate its ability to generate diversified text from a repetitive cue. We selected GPT-2 because it was the most relevant model at the time we conducted this research. The fine-tuned model was initialized from GPT-2 large, which has 774 million parameters. OpenAI has released GPT-2 models in several sizes, including a large version that is accessible to researchers and developers. However, the particular fine-tuned version we used in our paper, PLANNER: Generating Diversified Paragraph via Latent Language Diffusion Model, may include proprietary dataset adjustments and may not be directly available.

  • FT stands for fine-tuning, which is the process of taking a pre-trained model and training it further on a new dataset to specialize its knowledge.
  • Greedy decoding is a method where, at each step in generating text, the model picks the word with the highest probability.
  • Top-p sampling is a technique in which the model samples from the smallest set of the most probable words whose cumulative probability exceeds p, allowing for more randomness and potential creativity in its output, as addressed in the work The Curious Case of Neural Text Degeneration.
  • 512 generation roll-outs refers to the number of times the model generates text to test its capabilities. In this context, it means the model was used to generate text, starting from the prompt, 512 times for evaluation.
  • N-grams are sequences of N tokens.
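Top-p sampling can be illustrated with a toy next-word distribution (the vocabulary and probabilities below are invented for the example):

```python
import numpy as np

def top_p_sample(probs, p=0.9, rng=None):
    """Nucleus (top-p) sampling: keep the smallest set of the most probable
    tokens whose cumulative probability exceeds p, renormalize, and sample
    from that set."""
    rng = rng or np.random.default_rng()
    order = np.argsort(probs)[::-1]              # token indices, most probable first
    cum = np.cumsum(probs[order])
    keep = order[: np.searchsorted(cum, p) + 1]  # the "nucleus" of tokens
    return rng.choice(keep, p=probs[keep] / probs[keep].sum())

vocab = np.array(["great", "awful", "nice", "the"])
probs = np.array([0.5, 0.3, 0.15, 0.05])
token = vocab[top_p_sample(probs, p=0.7)]  # nucleus is {"great", "awful"}
```

With p = 0.7, the cumulative probabilities are 0.5, 0.8, 0.95, 1.0, so only the top two tokens survive the cutoff; the remaining long tail is never sampled.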

The percentage numbers in the n-gram columns indicate the frequency of each n-gram’s appearance within the generated text by a specific method. A lower maximum percentage suggests that there is a larger variety of different n-grams, which is typically seen as desirable for the generation of text that is less repetitive and more diverse.
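These percentages come down to a small counting routine over the roll-outs; the sample generations below are invented stand-ins for real model output:

```python
from collections import Counter

def first_ngram_frequencies(generations, n=1):
    """Fraction of generation roll-outs that begin with each distinct
    first n-gram."""
    counts = Counter(tuple(g.lower().split()[:n]) for g in generations)
    return {" ".join(ng): c / len(generations) for ng, c in counts.items()}

rollouts = [
    "The staff were friendly and helpful.",
    "The room was spotless.",
    "Great location near the station.",
    "A lovely stay overall.",
]
freqs = first_ngram_frequencies(rollouts, n=1)
print(freqs)  # {'the': 0.5, 'great': 0.25, 'a': 0.25}
```

A lower maximum value in this dictionary corresponds to the more diverse generation behavior reported for PLANNER in Figure 2.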

“More diversified” implies that the generated sequences of words (n-grams) are more varied and less repetitive compared with the repetitive n-grams generated by other methods or models. This diversification generally indicates higher-quality text generation that is more likely to produce useful and novel content for users.

Lastly, we observed accumulative errors in traditional autoregressive models such as GPT-2, where the model gets stuck in a loop and produces repetitive or unhelpful output. In Figure 2, the repeated phrase “awful hotel” in the text generated by GPT-2 is an example of such an accumulative error.

Figure 3: This hotel review text generated by a diffusion model progresses over 10 steps, from a vague to a more distinct and richly detailed positive sentiment about the hotel experience. This development follows a coarse-to-fine approach, starting from general commendation and culminating in a vibrant and specific final review that praises the bartender and the establishment’s ambiance and amenities.

Figure 3 illustrates the gradual evolution of generated text over a series of 10 steps. The model begins with coarse initial predictions (represented in Figure 3 as step 1, the initial state) and progresses by performing repeated processing steps to denoise and improve the text.

The reader should envision this scenario not as a snapshot of text being entered or prompted by an iPhone user but as a systematic process by which a language model refines an initially vague or broad expression into a more detailed and specific review text. At step 1, the text is a rough suggestion of what the user might want to express — it is terse and lacks detail. As time progresses, the model fine-tunes the text, introducing more specific descriptions, sentiment, and sophisticated language. By step 10, the end state, the generated text resembles a thoughtfully composed review that one might expect from an experienced reviewer who gives particular attention to various aspects of their hotel stay.

Thus, Figure 3 shows how the PLANNER model’s generation progresses from coarse to fine, giving readers a step-by-step visualization of how the text is iteratively enhanced to improve readability, specificity, and overall quality. The scenario starts with a minimal outline of positive sentiment and, over time, develops into a fleshed-out testimonial with vivid details emerging at each subsequent step.

Conclusion

The PLANNER model represents an advancement in the pursuit of improved natural language generation. Tackling the challenge of accumulative errors in traditional autoregressive models, our model leverages latent semantic diffusion to generate text that’s fluent, controlled, and diversified.

Acknowledgments

Many people contributed to this work, including Richard Bai, Ronan Collobert, Zhe Gan, David Grangier, Edouard Grave, Tatiana Likhomanenko, Barry Theobald, Yinfei Yang, and Yizhe Zhang.

Apple Resources

Xu, Jin, Xiaojiang Liu, Jianhao Yan, Deng Cai, Huayang Li, and Jian Li. 2022. “Learning to Break the Loop: Analyzing and Mitigating Repetitions for Neural Text Generation.” [link.]

Zhang, Yizhe, Jiatao Gu, Zhuofeng Wu, Shuangfei Zhai, Josh Susskind, and Navdeep Jaitly. 2023. “PLANNER: Generating Diversified Paragraph via Latent Language Diffusion Model.” [link.]

External References

Bengio, Samy, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. “Scheduled Sampling for Sequence Prediction with Recurrent Neural Networks.” [link.]

Holtzman, Ari, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. “The Curious Case of Neural Text Degeneration.” [link.]

Hu, Zhiting, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P Xing. 2017. “Toward Controlled Generation of Text.” [link.]

Keskar, Nitish Shirish, Bryan McCann, Lav R. Varshney, Caiming Xiong, and Richard Socher. 2019. “CTRL: A Conditional Transformer Language Model for Controllable Generation.” [link.]

Lovelace, Justin, Varsha Kishore, Chao Wan, Eliot Shekhtman, and Kilian Q. Weinberger. 2023. “Latent Diffusion for Language Generation.” [link.]

Peebles, William, and Saining Xie. 2022. “Scalable Diffusion Models with Transformers.” [link.]

Rombach, Robin, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. “High-Resolution Image Synthesis with Latent Diffusion Models.” [link.]


