---
license: cc-by-4.0
library_name: saelens
---

⚠️ WARNING: We have not done extensive testing with Gemma 2 9B in external infrastructure. Please report bugs clearly.

⚠️ WARNING: We are in the process of uploading SAEs of many different sparsities for every (Layer, Width) pair. For now, there is only one sparsity per (Layer, Width) pair.

# 1. Gemma Scope

Gemma Scope is a comprehensive, open suite of sparse autoencoders for Gemma 2 9B and 2B. Sparse autoencoders are a "microscope" of sorts that can help us break down a model’s internal activations into the underlying concepts, just as biologists use microscopes to study the individual cells of plants and animals.

See our [landing page](https://huggingface.co/google/gemma-scope) for details on the whole suite. This is one specific set of SAEs:

# 2. What Is `gemma-scope-9b-pt-att`?

- `gemma-scope-`: See Section 1 above.
- `9b-pt-`: These SAEs were trained on the Gemma v2 9B base model.
- `att`: These SAEs were trained on the model's attention layer outputs, before the final linear projection.

## 3. Point of Contact

Point of contact: Arthur Conmy

Contact by email:

```python
# Reverse the obfuscated string to recover the email address.
''.join(list('moc.elgoog@ymnoc')[::-1])
```

HuggingFace account: https://huggingface.co/ArthurConmyGDM
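
## 4. Quick Usage Sketch

Since this card lists `saelens` as its library, below is a minimal sketch of loading one of these SAEs with SAE Lens. The `release` and `sae_id` strings are assumptions inferred from this card's naming scheme, not confirmed registry ids; check the SAE Lens release registry for the exact values.

```python
# Minimal sketch, assuming SAE Lens's SAE.from_pretrained API and that this
# suite is registered under the release name below. The exact `release` and
# `sae_id` strings are assumptions and may differ in the SAE Lens registry.
import torch
from sae_lens import SAE

sae, cfg_dict, sparsity = SAE.from_pretrained(
    release="gemma-scope-9b-pt-att",            # assumed release name
    sae_id="layer_20/width_16k/average_l0_71",  # assumed (layer, width, sparsity) id
    device="cpu",
)

# The SAE maps attention-layer outputs into a sparse feature basis and back.
activations = torch.randn(1, cfg_dict["d_in"])  # stand-in for real model activations
feature_acts = sae.encode(activations)          # mostly-zero feature activations
reconstruction = sae.decode(feature_acts)       # reconstructed activations
```

In real use, `activations` would come from the attention layer output of Gemma 2 9B (e.g. captured with a forward hook) rather than random noise.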