It seems too early to put AI like this into serious applications…
Six current and former FDA officials who spoke on the condition of anonymity to discuss sensitive internal work told CNN that Elsa can be useful for generating meeting notes and summaries, or email and communique templates.
But it has also made up nonexistent studies, known as AI “hallucinating,” or misrepresented research, according to three current FDA employees and documents seen by CNN. This makes it unreliable for their most critical work, the employees said.
“Anything that you don’t have time to double-check is unreliable. It hallucinates confidently,” said one employee — a far cry from what has been publicly promised.
“AI is supposed to save our time, but I guarantee you that I waste a lot of extra time just due to the heightened vigilance that I have to have” to check for fake or misrepresented studies, a second FDA employee said.
I suspect they might find ChatGPT more useful than Elsa, their purpose-built tool.