Dev: “Boss, we need additional storage on the database cluster to handle the latest clients we signed up.”
Boss: “First see if AI can do it.”
Currently the answer would be “Have you tried compressing the data?” or “Do we really need all that data per client?” Both boil down to “ask the engineers to fix it for you, and come back to me if you fail.”
A coworker of mine built an LLM-powered FUSE filesystem as a very tongue-in-cheek response to the concept of letting AI do everything. It let the LLM generate the responses for listing directories and reading file contents.
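The gag is easy to sketch: FUSE hands your process the filesystem callbacks, and instead of consulting a disk you consult a model. Below is a minimal toy in that shape, not the coworker's actual code. The `query_llm` stub and `HallucinatedFS` class are assumptions for illustration; a real version would wire the handlers into fusepy's `Operations` and call an actual model.

```python
# Toy sketch: a filesystem whose listings and file contents are
# invented by an "LLM". The LLM is stubbed out with a deterministic
# fake so the handlers can be exercised without mounting anything.
import stat


def query_llm(prompt):
    # Stand-in for a real model call (hypothetical helper).
    if prompt.startswith("list"):
        return ["README.txt", "ideas.md"]
    return f"Generated contents for: {prompt}\n"


class HallucinatedFS:
    """Handlers in the shape of fusepy's Operations methods."""

    def readdir(self, path, fh):
        # Ask the model which files "exist" here.
        return [".", ".."] + query_llm(f"list files in {path}")

    def getattr(self, path, fh=None):
        # Claim everything exists: root is a dir, anything else a file.
        if path == "/":
            return {"st_mode": stat.S_IFDIR | 0o755, "st_nlink": 2}
        return {"st_mode": stat.S_IFREG | 0o444, "st_size": 4096}

    def read(self, path, size, offset, fh):
        # Hallucinate the file's contents fresh on every read.
        data = query_llm(f"contents of {path}").encode()
        return data[offset:offset + size]


# Mounting it for real would need fusepy:
#   from fuse import FUSE, Operations
#   FUSE(HallucinatedFS(), "/mnt/llmfs", foreground=True)
```

Every `ls` and `cat` then round-trips through the model, which is exactly the joke: the files are whatever the LLM feels like that day.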