Traditional GANs have problems when the data is in a discrete format, since there is no derivative defined between discrete values. So how can GANs generate text at all? Some sources claim the text should first be preprocessed with a transformer model, but aren't the values still discrete?
You are correct: sampling a discrete token is a non-differentiable operation, so the discriminator's gradient cannot flow back through the sampled output to the generator, no matter how the text was preprocessed.

To overcome this issue, researchers developed GAN variants such as SeqGAN that are designed specifically for generating discrete data like text. Here the generator and discriminator operate on sequences of tokens (e.g., words or characters) rather than continuous data, and the non-differentiable sampling step is sidestepped with reinforcement learning: the discriminator's score for a generated sequence is treated as a reward, and the generator is updated with policy gradients (REINFORCE), which only need the log-probabilities of the sampled tokens rather than derivatives through the samples themselves.
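To make this concrete, below is a minimal sketch of such a policy-gradient update in PyTorch. Everything here (the tiny LSTM generator, the assumed discriminator interface mapping token ids to a per-sequence realness score, and the hyperparameters) is an illustrative assumption rather than a reference implementation, and the reward is simplified to a single end-of-sequence score, whereas full SeqGAN estimates per-token rewards with Monte Carlo rollouts:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy autoregressive LSTM generator (illustrative, not SeqGAN's actual code)."""
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, state=None):
        h, state = self.lstm(self.embed(tokens), state)
        return self.out(h), state  # logits over the next token, new LSTM state

def policy_gradient_step(gen, disc, optimizer, batch_size, seq_len, bos_id=0):
    """One REINFORCE update: sample sequences, score them with the
    discriminator, and raise the log-probability of well-rewarded samples."""
    tokens = torch.full((batch_size, 1), bos_id, dtype=torch.long)
    log_probs, state = [], None
    for _ in range(seq_len):
        logits, state = gen(tokens[:, -1:], state)
        dist = torch.distributions.Categorical(logits=logits.squeeze(1))
        next_tok = dist.sample()                  # discrete and non-differentiable
        log_probs.append(dist.log_prob(next_tok))
        tokens = torch.cat([tokens, next_tok.unsqueeze(1)], dim=1)
    with torch.no_grad():
        reward = disc(tokens[:, 1:])              # assumed: (batch,) realness scores
    # REINFORCE loss: gradients flow through the log-probs, not the samples.
    loss = -(torch.stack(log_probs, dim=1).sum(dim=1) * reward).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```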

Text generation with GANs does typically begin by preprocessing the text into a numerical representation, such as a one-hot encoding or a learned word embedding, which gives the networks continuous inputs to work with. Your objection still stands on the output side, though: the generator must eventually emit discrete tokens, and that sampling step blocks gradients. Besides the reinforcement-learning approach above, a common workaround is the Gumbel-Softmax (Concrete) relaxation, which replaces hard sampling with a differentiable, nearly one-hot approximation.
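Here is a minimal sketch of that relaxation using PyTorch's built-in torch.nn.functional.gumbel_softmax; the vocabulary size, batch size, and embedding dimension are arbitrary illustrative values:

```python
import torch
import torch.nn.functional as F

vocab_size, batch_size, embed_dim = 10_000, 32, 128
logits = torch.randn(batch_size, vocab_size, requires_grad=True)  # generator output

# Soft, differentiable "sample": an almost-one-hot distribution over the vocabulary.
soft_tokens = F.gumbel_softmax(logits, tau=0.5, hard=False)

# The discriminator can consume soft tokens as a weighted mix of embeddings,
# so the whole generator-discriminator pipeline stays differentiable.
embedding = torch.nn.Embedding(vocab_size, embed_dim)
token_vectors = soft_tokens @ embedding.weight  # (batch_size, embed_dim)

# With hard=True the forward pass yields a true one-hot sample while the
# backward pass uses the soft gradient (a straight-through estimator).
hard_tokens = F.gumbel_softmax(logits, tau=0.5, hard=True)
```

A lower temperature tau makes samples closer to one-hot but the gradients noisier, so the temperature is often annealed over the course of training.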

The GAN architecture is also commonly built around a sequence model, such as a recurrent neural network (RNN) or a transformer, to capture the sequential structure of the text. The generator then produces tokens autoregressively, conditioned on a given prompt and on the tokens it has already generated, as in the sketch below.
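For illustration, here is how prompt-conditioned sampling might look, reusing the hypothetical Generator class from the earlier sketch (the function name and defaults are likewise assumptions):

```python
import torch

@torch.no_grad()
def generate(gen, prompt_ids, max_new_tokens=20):
    """Warm up the LSTM state on the prompt, then sample the continuation
    one token at a time, each conditioned on everything generated so far."""
    tokens = prompt_ids.unsqueeze(0)                 # (1, prompt_len)
    logits, state = gen(tokens)                      # state now encodes the prompt
    next_tok = torch.distributions.Categorical(logits=logits[:, -1]).sample()
    generated = [next_tok]
    for _ in range(max_new_tokens - 1):
        logits, state = gen(next_tok.unsqueeze(1), state)
        next_tok = torch.distributions.Categorical(logits=logits[:, -1]).sample()
        generated.append(next_tok)
    return torch.stack(generated, dim=1)             # (1, max_new_tokens)
```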

It's important to note that while GANs have shown promising results on text, they remain difficult to train: adversarial optimization is unstable to begin with, and policy-gradient estimates in the discrete setting add high variance. The generated text may also contain grammatical errors or irrelevant and nonsensical content, and it often lacks the coherence of human-written text.
Is there still ongoing research with GAN-based text generators, or have they been phased out? What would the advantages of a GAN be over traditional transformers?