- https://obrienlabs.medium.com/running-the-larger-google-gemma-7b-35gb-llm-for-7x-inference-performance-gain-8b63019523bb
- https://obrienlabs.medium.com/google-gemma-7b-and-2b-llm-models-are-now-available-to-developers-as-oss-on-hugging-face-737f65688f0d
- https://obrienlabs.medium.com/running-the-70b-llama-2-llm-locally-on-metal-via-llama-cpp-on-mac-studio-m2-ultra-32b3179e9cbe
- CUDA-based High-Performance Computing - LLM Training - Ground-to-GCP Cloud Hybrid
- Google Cloud Earth Engine (formerly Keyhole) - HPC Integration
- Calling Google Cloud APIs privately from on-premises using Private Service Connect
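The Private Service Connect setup referenced above can be sketched with `gcloud` (a minimal sketch: the VPC name `my-vpc`, the reserved address `10.10.0.2`, and the endpoint name `pscgapis` are illustrative placeholders, not values from this page; on-prem traffic would reach the endpoint over Cloud VPN or Interconnect):

```shell
# Reserve an internal IP inside the VPC for the PSC endpoint
# (names and the address below are illustrative placeholders)
gcloud compute addresses create psc-google-apis-ip \
  --global \
  --purpose=PRIVATE_SERVICE_CONNECT \
  --addresses=10.10.0.2 \
  --network=my-vpc

# Create the PSC endpoint fronting the Google APIs bundle so that
# calls to Google Cloud APIs stay on private addressing
gcloud compute forwarding-rules create pscgapis \
  --global \
  --network=my-vpc \
  --address=psc-google-apis-ip \
  --target-google-apis-bundle=all-apis
```

On-prem DNS would then resolve the relevant `*.googleapis.com` names to the reserved internal address.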
- Machine Learning on local or cloud-based NVIDIA or Apple GPUs
- 70B LLaMA 2 LLM local inference on Metal via llama.cpp on Mac Studio M2 Ultra (LinkedIn, Medium)
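The local-inference setup above can be sketched as follows (a minimal sketch: the quantized GGUF filename is a placeholder assumption, and recent llama.cpp builds enable Metal by default on Apple Silicon, so no extra build flag is shown):

```shell
# Build llama.cpp; Metal support is compiled in by default on Apple Silicon
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make

# Run a quantized 70B LLaMA 2 model, offloading all layers to the GPU
# (the model filename below is a placeholder for your local quantization)
./main -m ./models/llama-2-70b.Q4_K_M.gguf \
  -ngl 99 \
  -c 4096 \
  -p "Explain Metal GPU offload in one sentence."
```

`-ngl` sets the number of layers offloaded to the GPU; on a 192 GB M2 Ultra the full 70B model fits in unified memory, which is what enables all-layer offload.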
- Google Project Gemini - https://blog.google/technology/ai/google-io-2023-keynote-sundar-pichai/#ai-products - multimodal (https://www.techopedia.com/definition/multimodal-ai-multimodal-artificial-intelligence)
  - https://medium.com/@bedros-p/gemini-is-coming-to-makersuite-so-are-stubbs-32248f3924aa
  - https://developers.googleblog.com/2023/09/make-with-makersuite-part1-introduction.html
- https://blog.google/technology/ai/google-gemini-ai/#performance