An exploration of OpenAI’s GPT language model(s) for Biomedical Informatics
Telling someone that they have or do not have a certain health condition, or providing instructions on how to cure or treat a health condition
OpenAI’s models are not fine-tuned to provide medical information. You should never use our models to provide diagnostic or treatment services for serious medical conditions.
OpenAI’s platforms should not be used to triage or manage life-threatening issues that need immediate attention.
Consumer-facing uses of our models in medical, financial, and legal industries; in news generation or news summarization; and where else warranted, must provide a disclaimer to users informing them that AI is being used and of its potential limitations.
cf. Usage Policies
GPT-4 is a large multimodal model (accepting text inputs and emitting text outputs today, with image inputs coming in the future) that can solve difficult problems with greater accuracy than any of our previous models, thanks to its broader general knowledge and advanced reasoning capabilities. Like gpt-3.5-turbo, GPT-4 is optimized for chat but works well for traditional completions tasks. Learn how to use GPT-4 in our chat guide.
GPT-3.5 models can understand and generate natural language or code. Our most capable and cost-effective model in the GPT-3.5 family is gpt-3.5-turbo, which has been optimized for chat but works well for traditional completions tasks as well.
- Documentation
- [OpenAI Node.js Library](https://www.npmjs.com/package/openai)
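
As a concrete starting point, here is a minimal sketch of a chat completion call using the OpenAI Node.js library linked above (v4-style SDK). The model name, prompts, and the system message are illustrative assumptions for a biomedical-informatics context, not official guidance.

```typescript
// Minimal sketch, assuming the openai Node.js package (v4-style API) and
// an OPENAI_API_KEY environment variable. Model and prompts are examples only.
import OpenAI from "openai";

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function main() {
  const completion = await client.chat.completions.create({
    model: "gpt-3.5-turbo", // swap in "gpt-4" once granted access off the waitlist
    messages: [
      {
        role: "system",
        content:
          "You are an assistant for biomedical informatics research. " +
          "Do not provide diagnoses or treatment instructions.",
      },
      { role: "user", content: "Summarize what a SNOMED CT concept is." },
    ],
  });

  // Per the usage policies quoted above, any consumer-facing use of this
  // output should carry a disclaimer that AI generated it.
  console.log(completion.choices[0].message.content);
}

main();
```

The same request shape works for both gpt-3.5-turbo and GPT-4; only the `model` field changes, so experiments started on gpt-3.5-turbo can be rerun on GPT-4 later.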
Thank you for joining the waitlist to build with GPT-4!
To balance capacity with demand, we'll be sending invites gradually over time.
While we ramp up, invites will be prioritized to developers who have previously built with the OpenAI API. You can also gain priority access if you contribute model evaluations to OpenAI Evals that get merged, as this will help us improve the models for everyone.
Once you’re off the waitlist, you’ll receive an email with further instructions on how to get started. We will process requests for the 8K and 32K models at different rates based on capacity, so you can expect to receive 8K access first.
We appreciate your interest, and look forward to having you build with GPT-4 soon. In the meantime, we suggest getting started with gpt-3.5-turbo, the model powering ChatGPT.
– The OpenAI Team
P.S. You can also preview GPT-4 on chat.openai.com if you’re a ChatGPT Plus subscriber. We expect to be severely capacity constrained, so there will be a usage cap for the model depending on demand and system performance.