All our current web-llm model options crash Chrome on my iPhone 15.

- [ ] Support different model options based on a device RAM calculation
- [ ] Verify the RAM-detection logic on real devices
- [ ] (Maybe) dynamically unload loaded models based on total model memory load
- [ ] Research whether this approach actually helps and is appropriate