OpenAI is rolling out new beta features for ChatGPT Plus members right now. Subscribers have reported that the update includes the ability to upload files and work with them, as well as multimodal support. Basically, users won't have to select modes like Browse with Bing from the GPT-4 dropdown; ChatGPT will instead infer what they want based on context.
The new features bring a dash of the office features offered by OpenAI's ChatGPT Enterprise plan to the standalone individual chatbot subscription. I don't seem to have the multimodal update on my own Plus plan, but I was able to test out the Advanced Data Analysis feature, which works about as expected. Once a file is fed to ChatGPT, it takes a few moments to digest it before it's ready to work with it, and then the chatbot can do things like summarize the data, answer questions, or generate data visualizations based on prompts.
The chatbot isn't limited to just text files. On Threads, a user posted screenshots of a conversation in which they uploaded an image of a capybara and asked ChatGPT to use DALL-E 3 to create a Pixar-style image based on it. They then iterated on the first image's concept by uploading another image, this time of a wiggly skateboard, and asking ChatGPT to work it into the picture. For some reason, it put a hat on it, too?