Using LLMs to get to market faster – how one edge compute platform is taking advantage of the enormous amount of data it collects to improve products and services
Providing market-leading performance, security, and uptime is always a challenge when serving a global audience, and even regional deployments have their own tricks for success.
But how does Fastly keep track of customer demand for new products and services?
How do we make sure we build what our customers need?
Drawing on our Customer Enhancement Requests, feedback from the field, and the enormous amount of data we collect from our network, Fastly is utilising AI to make better decisions and make sense of the data arriving daily.
That data is far too much for any one team to consume; come to this session to see some of these tools in action.
Learn how you can utilise public tools to help with your configurations, and see an example of the Fastly semantic cache for OpenAI, which can reduce query costs by up to 20% with just a single line of code.
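To give a flavour of the idea before the session: a semantic cache answers a new query from a stored response when the query is similar enough in meaning to one already answered, skipping the expensive LLM call. The sketch below is purely illustrative, not Fastly's implementation; the toy bag-of-words embedding, the `SemanticCache` class, and the 0.8 similarity threshold are all stand-ins for what a production system would do with learned embeddings at the edge.

```python
# Illustrative semantic cache: return a cached answer when a new query is
# semantically close to one already answered, avoiding a costly LLM call.
# The embedding and threshold here are toy stand-ins, not Fastly's product.
import math
from collections import Counter


def embed(text):
    # Toy embedding: lowercase bag-of-words counts. Real systems use
    # learned sentence embeddings instead.
    return Counter(text.lower().split())


def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


class SemanticCache:
    def __init__(self, llm, threshold=0.8):
        self.llm = llm          # fallback: the real (costly) model call
        self.threshold = threshold
        self.entries = []       # list of (embedding, answer) pairs

    def query(self, text):
        vec = embed(text)
        for cached_vec, answer in self.entries:
            if cosine(vec, cached_vec) >= self.threshold:
                return answer, True   # cache hit: no LLM call made
        answer = self.llm(text)       # cache miss: call the model
        self.entries.append((vec, answer))
        return answer, False


# Demo with a fake LLM that records how often it is actually called.
calls = []


def fake_llm(prompt):
    calls.append(prompt)
    return f"answer to: {prompt}"


cache = SemanticCache(fake_llm)
a1, hit1 = cache.query("What is edge compute")
a2, hit2 = cache.query("what is edge compute")  # same meaning, different case
```

Here the second, near-identical query is served from the cache, so the fake LLM is only invoked once; every avoided call is where the cost saving comes from.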