One of the issues that average end users quickly encounter when using generative artificial intelligence (AI) is a sense of prompt fatigue. As amazing as generative AI can be, there are still plenty of tasks that are simply easier for humans to do themselves than to craft a series of prompts that would get a generative AI platform to produce the right answer.
However, with the arrival of Duet AI for Google Workspace, it’s becoming clear that the need for most end users to master prompt engineering is going to be minimal. At the Google Cloud Next conference, Google showed how Duet AI for Google Workspace will crawl through Google Drive to surface all the content related to a document that an end user is creating. It will then invoke the PaLM 2 large language model (LLM) to generate additional suggestions.
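To make the shift from explicit prompting to context-driven suggestions more concrete, the sketch below shows the general pattern in Python: related documents are retrieved on the user’s behalf and assembled into a prompt automatically. The in-memory document store, function names and naive ranking logic are all assumptions for illustration, not Google’s actual implementation or APIs.

```python
# A minimal sketch of the context-driven pattern described above, under the
# assumption that the system, rather than the user, gathers related material
# and assembles the prompt. The names here (DRIVE, find_related_docs,
# build_context_prompt) are illustrative placeholders, not Google APIs.

DRIVE = {
    "Q3 budget.doc": "Q3 marketing budget proposal with line items for ads",
    "Launch plan.doc": "Timeline and owners for the fall product launch",
    "Team notes.doc": "Weekly sync notes on hiring and onboarding",
}

def find_related_docs(draft: str, drive: dict, top_k: int = 2) -> list:
    """Rank stored documents by naive keyword overlap with the draft."""
    draft_words = set(draft.lower().split())
    scored = sorted(
        drive.items(),
        key=lambda item: len(draft_words & set(item[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:top_k]]

def build_context_prompt(draft: str, related: list) -> str:
    """Assemble a prompt on the user's behalf from the draft plus related docs."""
    context = "\n".join(f"- {snippet}" for snippet in related)
    return (
        "You are assisting with a document in progress.\n"
        f"Related material:\n{context}\n\n"
        f"Draft so far:\n{draft}\n\n"
        "Suggest the next paragraph."
    )

if __name__ == "__main__":
    draft = "Planning the budget for the fall product launch"
    prompt = build_context_prompt(draft, find_related_docs(draft, DRIVE))
    # In a real system the assembled prompt would be sent to an LLM such as
    # PaLM 2; here it is simply printed to show the user never wrote it.
    print(prompt)
```

The point of the sketch is that the prompt engineering happens behind the scenes: the end user only edits the document, while the surrounding context supplies what a hand-written prompt would otherwise have to.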
Aparna Pappu, general manager and vice president for Google Workspace, told conference attendees that the goal is to move away from a prompt-based interaction with Duet AI to one that is based on the context of the document being created.
Duet AI for Google Workspace achieves that goal without any of the data created through those interactions being used to train an AI model, she added.
Google may be playing catch-up when it comes to generative AI, but it’s apparent that the company is focusing on how to seamlessly embed generative AI into applications. For example, a note-taking capability in the form of Duet AI in the Meet application will not only automatically take notes, but also adjust lighting and sound, detect faces and add tiles and captions in multiple languages. There will even come a day when people use Duet AI in Meet to attend meetings in their place, noted Pappu.
Ultimately, the goal is to make generative AI a pervasive feature of every application. Rather than having to employ a separate platform invoked via a series of prompts, end users will simply know that generative AI is a continual background presence. In time, just about every application and platform is going to follow suit, and the number of prompts anyone needs to master to use generative AI effectively will drop to near zero. The day when the average end user starts to take note of generative AI is not that far off.
In the meantime, IT leaders would be well-advised to assess how generative AI will actually be employed across their application portfolios. Plenty of people are training to become prompt engineers on the assumption that the first generation of generative AI represents the way these innovations will be consumed. In reality, the pace of AI innovation is already accelerating faster than initially anticipated. The truth is there will be no escaping it. The only question that remains is to what degree to depend on it.