Timothy Jaeryang Baek 2025-01-31 01:34:19 -08:00
parent 61fad2681b
commit 5360cb5d50
59 changed files with 200 additions and 31 deletions

View File

@@ -12,7 +12,7 @@ Folders allow you to group related conversations together for quick access and b
 - **Creating a Folder**: You can create a new folder to store specific conversations. This is useful if you want to keep conversations of a similar topic or purpose together.
 - **Moving Conversations into Folders**: Conversations can be moved into folders by dragging and dropping them. This allows you to structure your workspace in a way that suits your workflow.
-![Folder Demo](/img/folder-demo.gif)
+![Folder Demo](/images/folder-demo.gif)
 ### Example Use Case
@@ -25,7 +25,7 @@ If you are managing multiple projects, you can create separate folders for each
 Tags provide an additional layer of organization by allowing you to label conversations with keywords or phrases.
 - **Adding Tags to Conversations**: Tags can be applied to conversations based on their content or purpose. Tags are flexible and can be added or removed as needed.
-![Tag Demo](/img/tag-demo.gif)
+![Tag Demo](/images/tag-demo.gif)
 - **Using Tags for Searching**: Tags make it easy to locate specific conversations by using the search feature. You can filter conversations by tags to quickly find those related to specific topics.
 ### Example Use Case

View File

@@ -59,11 +59,11 @@ For your feedback to affect the leaderboard, you need what's called a **siblin
 Here's a sneak peek at how the Arena Model interface works:
-![Arena Model Example](/img/evaluation/arena.png)
+![Arena Model Example](/images/evaluation/arena.png)
 Need more depth? You can even replicate a [**Chatbot Arena**](https://lmarena.ai/)-style setup!
-![Chatbot Arena Example](/img/evaluation/arena-many.png)
+![Chatbot Arena Example](/images/evaluation/arena-many.png)
 ### **2. Normal Interaction**
@@ -71,11 +71,11 @@ No need to switch to “arena mode” if you don't want to. You can use Open Web
 For instance, this is how you can rate during a normal interaction:
-![Normal Model Rating Interface](/img/evaluation/normal.png)
+![Normal Model Rating Interface](/images/evaluation/normal.png)
 And here's an example of setting up a multi-model comparison, similar to an arena:
-![Multi-Model Comparison](/img/evaluation/normal-many.png)
+![Multi-Model Comparison](/images/evaluation/normal-many.png)
 ---
@@ -85,7 +85,7 @@ After rating, check out the **Leaderboard** under the Admin Panel. This is where
 This is a sample leaderboard layout:
-![Leaderboard Example](/img/evaluation/leaderboard.png)
+![Leaderboard Example](/images/evaluation/leaderboard.png)
 ### Topic-Based Reranking
@@ -100,7 +100,7 @@ Don't skip this! Tagging is super powerful because it allows you to **re-rank mo
 Here's an example of how re-ranking looks:
-![Reranking Leaderboard by Topic](/img/evaluation/leaderboard-reranked.png)
+![Reranking Leaderboard by Topic](/images/evaluation/leaderboard-reranked.png)
 ---

View File

@@ -14,7 +14,7 @@ An example of a graph visualization Action can be seen in the video below.
 <p align="center">
   <a href="#">
-    <img src="/img/pipelines/graph-viz-action.gif" alt="Graph Visualization Action" />
+    <img src="/images/pipelines/graph-viz-action.gif" alt="Graph Visualization Action" />
   </a>
 </p>

View File

@@ -27,9 +27,9 @@ To manage your Ollama instance in Open WebUI, follow these steps:
 Here's what the management screen looks like:
-![Ollama Management Screen](/img/getting-started/quick-start/manage-ollama.png)
+![Ollama Management Screen](/images/getting-started/quick-start/manage-ollama.png)
-![Ollama Management Screen](/img/getting-started/quick-start/manage-modal-ollama.png)
+![Ollama Management Screen](/images/getting-started/quick-start/manage-modal-ollama.png)
 ## A Quick and Efficient Way to Download Models
@@ -38,7 +38,7 @@ If you're looking for a faster option to get started, you can download models
 Here's an example of how it works:
-![Ollama Download Prompt](/img/getting-started/quick-start/selector-ollama.png)
+![Ollama Download Prompt](/images/getting-started/quick-start/selector-ollama.png)
 This method is perfect if you want to skip navigating through the Admin Settings menu and get right to using your models.

View File

@@ -24,7 +24,7 @@ import { SponsorList } from "@site/src/components/SponsorList";
 [![Discord](https://img.shields.io/badge/Discord-Open_WebUI-blue?logo=discord&logoColor=white)](https://discord.gg/5rJgQTnV4s)
 [![](https://img.shields.io/static/v1?label=Sponsor&message=%E2%9D%A4&logo=GitHub&color=%23fe8e86)](https://github.com/sponsors/tjbck)
-![Open WebUI Demo](/img/demo.gif)
+![Open WebUI Demo](/images/demo.gif)
 ## Quick Start with Docker 🐳

View File

@@ -9,7 +9,7 @@ Filters are used to perform actions against incoming user messages and outgoing
 <p align="center">
   <a href="#">
-    <img src="/img/pipelines/filters.png" alt="Filter Workflow" />
+    <img src="/images/pipelines/filters.png" alt="Filter Workflow" />
   </a>
 </p>

View File

@@ -5,7 +5,7 @@ title: "⚡ Pipelines"
 <p align="center">
   <a href="#">
-    <img src="/img/pipelines/header.png" alt="Pipelines Logo" />
+    <img src="/images/pipelines/header.png" alt="Pipelines Logo" />
   </a>
 </p>
@@ -37,7 +37,7 @@ Welcome to **Pipelines**, an [Open WebUI](https://github.com/open-webui) initiat
 <p align="center">
   <a href="#">
-    <img src="/img/pipelines/workflow.png" alt="Pipelines Workflow" />
+    <img src="/images/pipelines/workflow.png" alt="Pipelines Workflow" />
   </a>
 </p>

View File

@@ -8,7 +8,7 @@ Pipes are functions that can be used to perform actions prior to returning LLM m
 <p align="center">
   <a href="#">
-    <img src="/img/pipelines/pipes.png" alt="Pipe Workflow" />
+    <img src="/images/pipelines/pipes.png" alt="Pipe Workflow" />
   </a>
 </p>
@@ -16,6 +16,6 @@ Pipes that are defined in your WebUI show up as a new model with an "External" d
 <p align="center">
   <a href="#">
-    <img src="/img/pipelines/pipe-model-example.png" alt="Pipe Models in WebUI" />
+    <img src="/images/pipelines/pipe-model-example.png" alt="Pipe Models in WebUI" />
   </a>
 </p>

View File

@@ -160,7 +160,7 @@ Using Azure OpenAI Dall-E directly is unsupported, but you can [set up a LiteLLM
 ## Using Image Generation
-![Image Generation Tutorial](/img/tutorial_image_generation.png)
+![Image Generation Tutorial](/images/tutorial_image_generation.png)
 1. First, use a text generation model to write a prompt for image generation.
 2. After the response has finished, you can click the Picture icon to generate an image.

View File

@@ -0,0 +1,169 @@
---
sidebar_position: 1
title: "🦥 Run DeepSeek R1 Dynamic 1.58-bit with Llama.cpp"
---
A huge shoutout to **UnslothAI** for their incredible efforts! Thanks to their hard work, we can now run the **full DeepSeek-R1** 671B-parameter model in its dynamic 1.58-bit quantized form (compressed to just 131GB) on **Llama.cpp**! And the best part? You no longer have to despair about needing massive enterprise-class GPUs or servers: it's possible to run this model on your personal machine (albeit slowly on most consumer hardware).
:::note
The only true **DeepSeek-R1** model on Ollama is the **671B version** available here: [https://ollama.com/library/deepseek-r1:671b](https://ollama.com/library/deepseek-r1:671b). Other versions are **distilled** models.
:::
This guide focuses on running the **full DeepSeek-R1 Dynamic 1.58-bit quantized model** using **Llama.cpp** integrated with **Open WebUI**. For this tutorial, we'll demonstrate the steps on an **M4 Max + 128GB RAM** machine. You can adapt the settings to your own configuration.
---
## Step 1: Install Llama.cpp
You can either:
- [Download the prebuilt binaries](https://github.com/ggerganov/llama.cpp/releases)
- **Or build it yourself**: Follow the instructions here: [Llama.cpp Build Guide](https://github.com/ggerganov/llama.cpp/blob/master/docs/build.md) (a typical build is sketched below)
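If you build from source, the standard CMake flow from the build guide looks like this (a minimal sketch; add platform-specific options such as `-DGGML_CUDA=ON` for NVIDIA GPUs as the guide describes):

```bash
# Clone and build llama.cpp; the release build includes llama-server
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release
# The server binary ends up in build/bin/llama-server
```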
## Step 2: Download the Model Provided by UnslothAI
Head over to [Unsloth's Hugging Face page](https://huggingface.co/unsloth/DeepSeek-R1-GGUF) and download the appropriate **dynamic quantized version** of DeepSeek-R1. For this tutorial, we'll use the **1.58-bit (131GB)** version, which is highly optimized yet remains surprisingly functional.
:::tip
Know your "working directory" — where your Python script or terminal session is running. The model files will download to a subfolder of that directory by default, so be sure you know its path! For example, if you're running the command below in `/Users/yourname/Documents/projects`, your downloaded model will be saved under `/Users/yourname/Documents/projects/DeepSeek-R1-GGUF`.
:::
To understand more about UnslothAI's development process and why these dynamic quantized versions are so efficient, check out their blog post: [UnslothAI DeepSeek R1 Dynamic Quantization](https://unsloth.ai/blog/deepseekr1-dynamic).
Here's how to download the model programmatically:
```python
# Install Hugging Face dependencies before running this:
# pip install huggingface_hub hf_transfer

import os

# Opt in to the faster Rust-based downloader installed above;
# huggingface_hub only uses hf_transfer when this variable is set
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="unsloth/DeepSeek-R1-GGUF",  # the Hugging Face repo
    local_dir="DeepSeek-R1-GGUF",        # model downloads into this directory
    allow_patterns=["*UD-IQ1_S*"],       # only download the 1.58-bit version
)
```
Once the download completes, you'll find the model files in a directory structure like this:
```
DeepSeek-R1-GGUF/
├── DeepSeek-R1-UD-IQ1_S/
│   ├── DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf
│   ├── DeepSeek-R1-UD-IQ1_S-00002-of-00003.gguf
│   ├── DeepSeek-R1-UD-IQ1_S-00003-of-00003.gguf
```
:::info
🛠️ Update paths in the later steps to **match your specific directory structure**. For example, if your script was in `/Users/tim/Downloads`, the full path to the GGUF file would be:
`/Users/tim/Downloads/DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf`.
:::
## Step 3: Make Sure Open WebUI is Installed and Running
If you don't already have **Open WebUI** installed, no worries! It's a simple setup. Just follow the [Open WebUI documentation here](https://docs.openwebui.com/). Once installed, start the application; we'll connect it to the DeepSeek-R1 model in a later step.
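For reference, the Docker quick start from the Open WebUI docs is a one-liner (shown here for convenience; adjust the host port and volume name to taste, and see the documentation for pip and other install methods):

```bash
# Run Open WebUI in Docker, persisting data in a named volume;
# the UI becomes available at http://localhost:3000
docker run -d -p 3000:8080 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```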
## Step 4: Serve the Model Using Llama.cpp
Now that the model is downloaded, the next step is to run it using **Llama.cpp's server mode**. Before you begin:
1. **Locate the `llama-server` binary.**
If you built from source (as outlined in Step 1), the `llama-server` executable will be located in `llama.cpp/build/bin`. Navigate to this directory by using the `cd` command:
```bash
cd [path-to-llama-cpp]/llama.cpp/build/bin
```
Replace `[path-to-llama-cpp]` with the location where you cloned or built Llama.cpp. For example:
```bash
cd ~/Documents/workspace/llama.cpp/build/bin
```
2. **Point to your model folder.**
Use the full path to the GGUF files downloaded in Step 2. When serving a split model, specify the first file of the set (e.g., `DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf`); Llama.cpp picks up the remaining shards automatically.
Here's the command to start the server:
```bash
./llama-server \
--model /[your-directory]/DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf \
--port 10000 \
--ctx-size 1024 \
--n-gpu-layers 40
```
> 🔑 **Parameters to Customize Based on Your Machine:**
> - **`--model`:** Replace `/[your-directory]/` with the path where the GGUF files were downloaded in Step 2.
> - **`--port`:** The server default is `8080`, but feel free to change it based on your port availability.
> - **`--ctx-size`:** Determines context length (number of tokens). You can increase it if your hardware allows, but be cautious of rising RAM/VRAM usage.
> - **`--n-gpu-layers`:** Set the number of layers you want to offload to your GPU for faster inference. The exact number depends on your GPU's memory capacity; reference Unsloth's table for specific recommendations. For CPU-only setups, set it to `0`.
For example, if your model was downloaded to `/Users/tim/Documents/workspace` and you have an RTX 4090 GPU with 24GB VRAM, your command would look like this:
```bash
./llama-server \
--model /Users/tim/Documents/workspace/DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf \
--port 10000 \
--ctx-size 1024 \
--n-gpu-layers 40
```
Once the server starts, it will host a **local OpenAI-compatible API** endpoint at:
```
http://127.0.0.1:10000
```
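You can sanity-check the endpoint with a quick request (the `/v1/models` route is part of llama-server's OpenAI-compatible API):

```bash
# Should return a JSON object listing the loaded DeepSeek-R1 model
curl http://127.0.0.1:10000/v1/models
```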
:::info
🖥️ **Llama.cpp Server Running**
![Server Screenshot](/images/tutorials/deepseek/serve.png)
After running the command, you should see a message confirming the server is active and listening on port 10000.
:::
Be sure to **keep this terminal session running**, as it serves the model for all subsequent steps.
## Step 5: Connect Llama.cpp to Open WebUI
1. Go to **Admin Settings** in Open WebUI.
2. Navigate to **Connections > OpenAI Connections.**
3. Add the following details for the new connection:
- URL: `http://127.0.0.1:10000/v1`
- API Key: `none`
:::info
🖥️ **Adding Connection in Open WebUI**
![Connection Screenshot](/images/tutorials/deepseek/connection.png)
Once saved, the new connection will appear in your list of OpenAI connections.
:::
Once the connection is saved, you can start querying **DeepSeek-R1** directly from Open WebUI! 🎉
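If Open WebUI can't reach the server, note that a Dockerized Open WebUI resolves `127.0.0.1` to the container itself; in that case, point the connection at the host instead (e.g., `http://host.docker.internal:10000/v1` on Docker Desktop). You can also test the server directly with a minimal chat request (the `model` value below is illustrative; llama-server serves whichever model it was started with):

```bash
# Minimal OpenAI-style chat completion against the local llama-server
curl http://127.0.0.1:10000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "DeepSeek-R1",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```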
---
## Example: Generating Responses
You can now use Open WebUI's chat interface to interact with the **DeepSeek-R1 Dynamic 1.58-bit model**.
:::info
🖥️ **DeepSeek-R1 Response in Open WebUI**
![Response Screenshot](/images/tutorials/deepseek/response.png)
:::
---
## Notes and Considerations
- **Performance:**
  Running a massive 131GB model like DeepSeek-R1 on personal hardware will be **slow**. Even with our M4 Max (128GB RAM), inference speeds were modest. But the fact that it works at all is a testament to UnslothAI's optimizations.
- **VRAM/Memory Requirements:**
  Ensure sufficient VRAM and system RAM for optimal performance. With low-end GPUs or CPU-only setups, expect slower speeds (but it's still doable!).
---
Thanks to **UnslothAI** and **Llama.cpp**, running one of the largest open-source reasoning models, **DeepSeek-R1** (1.58-bit version), is finally accessible to individuals. While it's challenging to run such models on consumer hardware, the ability to do so without massive computational infrastructure is a significant technological milestone.
⭐ Big thanks to the community for pushing the boundaries of open AI research.
Happy experimenting! 🚀

View File

@@ -60,7 +60,7 @@ b. **Modify `docusaurus.config.ts` to Use Environment Variables**
 const config: Config = {
   title: "Open WebUI",
   tagline: "ChatGPT-Style WebUI for LLMs (Formerly Ollama WebUI)",
-  favicon: "img/favicon.png",
+  favicon: "images/favicon.png",
   url: process.env.SITE_URL || "https://openwebui.com",
   baseUrl: process.env.BASE_URL || "/",
   ...

View File

@@ -348,7 +348,7 @@ docker exec -it open-webui curl http://host.docker.internal:8080/search?q=this+i
 5. Adjust the `Search Result Count` and `Concurrent Requests` values accordingly
 6. Save changes
-![SearXNG GUI Configuration](/img/tutorial_searxng_config.png)
+![SearXNG GUI Configuration](/images/tutorial_searxng_config.png)
 ## 5. Using Web Search in a Chat
@@ -356,7 +356,7 @@ To access Web Search, Click on the + next to the message input field.
 Here you can toggle Web Search On/Off.
-![Web Search UI Toggle](/img/web_search_toggle.png)
+![Web Search UI Toggle](/images/web_search_toggle.png)
 By following these steps, you will have successfully set up SearXNG with Open WebUI, enabling you to perform web searches using the SearXNG engine.
@@ -379,14 +379,14 @@ This is enabled on a per session basis eg. reloading the page, changing to anoth
 7. Fill `Google PSE API Key` with the `API key` and `Google PSE Engine Id` (# 4)
 8. Click `Save`
-![Open WebUI Admin panel](/img/tutorial_google_pse1.png)
+![Open WebUI Admin panel](/images/tutorial_google_pse1.png)
 #### Note
 You have to enable `Web search` in the prompt field, using plus (`+`) button.
 Search the web ;-)
-![enable Web search](/img/tutorial_google_pse2.png)
+![enable Web search](/images/tutorial_google_pse2.png)
 ## Brave API
@@ -444,13 +444,13 @@ services:
 6. [Optional] Enter the `SearchApi engine` name you want to query. Example, `google`, `bing`, `baidu`, `google_news`, `bing_news`, `google_videos`, `google_scholar` and `google_patents.` By default, it is set to `google`.
 7. Click `Save`.
-![Open WebUI Admin panel](/img/tutorial_searchapi_search.png)
+![Open WebUI Admin panel](/images/tutorial_searchapi_search.png)
 #### Note
 You have to enable `Web search` in the prompt field, using plus (`+`) button to search the web using [SearchApi](https://www.searchapi.io/) engines.
-![enable Web search](/img/enable_web_search.png)
+![enable Web search](/images/enable_web_search.png)
 ## Kagi API

View File

@@ -6,7 +6,7 @@ import { themes as prismThemes } from "prism-react-renderer";
 const config: Config = {
   title: "Open WebUI",
   tagline: "ChatGPT-Style WebUI for LLMs (Formerly Ollama WebUI)",
-  favicon: "img/favicon.png",
+  favicon: "images/favicon.png",
   // Set the production url of your site here
   url: "https://openwebui.com",
@@ -65,12 +65,12 @@ const config: Config = {
   themeConfig: {
     // Replace with your project's social card
-    // image: "img/docusaurus-social-card.jpg",
+    // image: "images/docusaurus-social-card.jpg",
     navbar: {
       title: "Open WebUI",
       logo: {
-        src: "img/logo.png",
+        src: "images/logo.png",
-        srcDark: "img/logo-dark.png",
+        srcDark: "images/logo-dark.png",
       },
       items: [
         // {
@@ -108,7 +108,7 @@ const config: Config = {
     },
     footer: {
       logo: {
-        src: "img/logo-dark.png",
+        src: "images/logo-dark.png",
         height: 100,
       },
       style: "light",

(The remaining changed files are binary images: the existing screenshots and GIFs were moved from the old `img/` static directory to `images/`, with identical sizes before and after, and three new screenshots were added, evidently the ones referenced by the new DeepSeek tutorial.)