Post
238
I used three posts to explain GPU/CPU and LLM performance; now I finally circle back to my own model.
OneSQL needs a GPU because it processes long prompts. It is not a chatbot that replies to short prompts with long answers. I call models of my kind workhorse models.
We all have to scramble for GPUs to get adoption. Below are a few ways.
You can inherit it. If you have a new Mac, congratulations: you already have a GPU.
You can leverage it. Get inference providers to adopt your model, and you switch from CapEx to OpEx.
Or you can buy it. Go frugal. Find older GPUs with enough HBM to house your model.
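For the frugal route, a quick back-of-the-envelope check tells you whether a model fits an older card. The numbers below are illustrative assumptions, not OneSQL's actual footprint:

```python
# Rough VRAM estimate for hosting a model's weights on a given GPU.
# All figures here are hypothetical examples, not OneSQL's real specs.

def vram_needed_gb(params_billions: float, bytes_per_param: float,
                   overhead: float = 1.2) -> float:
    """Weights footprint in GB, with ~20% headroom for KV cache and activations."""
    return params_billions * bytes_per_param * overhead

# e.g. a 7B-parameter model quantized to 4 bits (~0.5 bytes/param)
needed = vram_needed_gb(7, 0.5)
print(f"{needed:.1f} GB")  # well within a 16 GB older GPU
```

The overhead factor matters for workhorse models: long prompts mean a large KV cache, so pad the estimate rather than buying a card that fits the weights exactly.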