Note: This is just a continuation of Part 1 of my AI Art Generators for CPU and portable devices (and as part of the requirement for being "Established", he he). These apps require knowledge of running them using the code and guides provided on GitHub. If installers are provided, then it's the user's responsibility to find the other resources connected to the software. Run at your own risk and read the instructions carefully.
I'm more focused on testing Stable Diffusion since it's open source and has a lot of branches and developments, with different modes of use to try in CPU mode and on portable devices. I'll just share the ones that worked for me so others can try them on their end. For guides on using Stable Diffusion, please refer to the linked guide.
No silly questions please. This is for those who know how to follow instructions and use them for FREE.
======================================================================================================================
1. Multimodal Advanced, Generative, and Intelligent Creation (MMagic [em'mædʒɪk]) — see the GitHub repo.
What's New
New release v1.0.0 [26/05/2023]:
We are excited to announce the release of MMagic v1.0.0 that inherits from MMEditing and MMGeneration.
- Support tomesd for StableDiffusion speed-up.
- Support inferencer for all inpainting/matting/image restoration models.
- Support animated drawings.
- Support Style-Based Global Appearance Flow for Virtual Try-On.
- Fix inferencer in pip-install.
After iterative updates with OpenMMLab 2.0 framework and merged with MMGeneration, MMEditing has become a powerful tool that supports low-level algorithms based on both GAN and CNN. Today, MMEditing embraces Generative AI and transforms into a more advanced and comprehensive AIGC toolkit: MMagic (Multimodal Advanced, Generative, and Intelligent Creation). MMagic will provide more agile and flexible experimental support for researchers and AIGC enthusiasts, and help you on your AIGC exploration journey.
We highlight the following new features.
1. New Models
We support 11 new models in 4 new tasks.
- Text2Image / Diffusion
  - ControlNet
  - DreamBooth
  - Stable Diffusion
  - Disco Diffusion
  - GLIDE
  - Guided Diffusion
- 3D-aware Generation
  - EG3D
- Image Restoration
  - NAFNet
  - Restormer
  - SwinIR
- Image Colorization
  - InstColorization
2. Magic Diffusion Model
For the Diffusion Model, we provide the following "magic":
- Support image generation based on Stable Diffusion and Disco Diffusion.
- Support finetune methods such as DreamBooth and DreamBooth LoRA.
- Support controllability in text-to-image generation using ControlNet.
- Support acceleration and optimization strategies based on xFormers to improve training and inference efficiency.
- Support video generation based on MultiFrame Render.
- Support calling basic models and sampling strategies through DiffuserWrapper.
3. Upgraded Framework
By using MMEngine and MMCV of the OpenMMLab 2.0 framework, MMagic has been upgraded with the following new features:
- MMagic supports all the tasks, models, metrics, and losses in MMEditing and MMGeneration and unifies the interfaces of all components based on MMEngine.
- Refactor DataSample to support the combination and splitting of batch dimensions.
- Refactor DataPreprocessor and unify the data format for various tasks during training and inference.
- Refactor MultiValLoop and MultiTestLoop, supporting the evaluation of both generation-type metrics (e.g. FID) and reconstruction-type metrics (e.g. SSIM), and supporting the evaluation of multiple datasets at once.
- Support visualization on local files or using TensorBoard and wandb.
- Support for 33+ algorithms accelerated by PyTorch 2.0.
Please refer to changelog.md for details and release history.
Please refer to the migration documents to migrate from the old version MMEditing 0.x to the new version MMagic 1.x.
======================================================================================================================
Introduction
MMagic (Multimodal Advanced, Generative, and Intelligent Creation) is an advanced and comprehensive AIGC toolkit that inherits from MMEditing and MMGeneration. It is an open-source image and video editing&generating toolbox based on PyTorch. It is a part of the OpenMMLab project.
Currently, MMagic supports multiple image and video generation/editing tasks.
(introduction video: mmagic_introduction.mp4)
The best practice on our main branch works with Python 3.8+ and PyTorch 1.9+.
Major features
- State of the Art Models
MMagic provides state-of-the-art generative models to process, edit and synthesize images and videos.
- Powerful and Popular Applications
MMagic supports popular and contemporary image restoration, text-to-image, 3D-aware generation, inpainting, matting, super-resolution and generation applications. Specifically, MMagic supports fine-tuning for Stable Diffusion and many exciting diffusion applications such as ControlNet Animation with SAM. MMagic also supports GAN interpolation, GAN projection, GAN manipulation and many other popular GAN applications. It's time to begin your AIGC exploration journey!
- Efficient Framework
By using MMEngine and MMCV of the OpenMMLab 2.0 framework, MMagic decomposes the editing framework into different modules, and one can easily construct a customized editor framework by combining different modules. We can define the training process just like playing with Legos and provide rich components and strategies. In MMagic, you have complete control over the training process with different levels of APIs. With the support of MMSeparateDistributedDataParallel, distributed training for dynamic architectures can be easily implemented.
Contributing
More and more community contributors are joining us to make our repo better. Some recent projects are contributed by the community including:
The Projects page is open to make it easier for everyone to add projects to MMagic.
- GLIDE is contributed by @Taited.
- Restormer is contributed by @AlexZou14.
- SwinIR is contributed by @Zdafeng.
We appreciate all contributions to improve MMagic. Please refer to CONTRIBUTING.md in MMCV and CONTRIBUTING.md in MMEngine for more details about the contributing guideline.
Installation
MMagic depends on PyTorch, MMEngine and MMCV. Below are quick steps for installation.
Step 1. Install PyTorch following official instructions.
Step 2. Install MMCV, MMEngine and MMagic with MIM.
pip3 install openmim
mim install 'mmcv>=2.0.0'
mim install 'mmengine'
mim install 'mmagic'
Step 3. Verify MMagic has been successfully installed.
cd ~
python -c "import mmagic; print(mmagic.version)"
# Example output: 1.0.0
Getting Started
After installing MMagic successfully, you can now play with MMagic! To generate an image from text, you only need a few lines of code with MMagic!
from mmagic.apis import MMagicInferencer
# create a Stable Diffusion inferencer
sd_inferencer = MMagicInferencer(model_name='stable_diffusion')
text_prompts = 'A panda is having dinner at KFC'
result_out_dir = 'output/sd_res.png'
# generate the image from the text prompt and save it to result_out_dir
sd_inferencer.infer(text=text_prompts, result_out_dir=result_out_dir)
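If you want to generate several images in one go, here is a minimal sketch (my own addition, not from the MMagic docs) that simply reuses the same inferencer and the same infer() call shown above for a list of prompts; the prompts and output paths are illustrative:
from mmagic.apis import MMagicInferencer
sd_inferencer = MMagicInferencer(model_name='stable_diffusion')
prompts = [
    'A panda is having dinner at KFC',
    'A watercolor painting of a mountain lake at sunrise',
]
for i, prompt in enumerate(prompts):
    # each result is written to its own file under output/
    sd_inferencer.infer(text=prompt, result_out_dir=f'output/sd_res_{i}.png')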
Please see quick run and inference for the basic usage of MMagic.
Install MMagic from source
You can also experiment on the latest developed version rather than the stable release by installing MMagic from source with the following commands:
git clone https://github.com/open-mmlab/mmagic.git
cd mmagic
pip3 install -e .
Please refer to installation for more detailed instruction.
Acknowledgement
MMagic is an open source project that is contributed by researchers and engineers from various colleges and companies. We wish that the toolbox and benchmark could serve the growing research community by providing a flexible toolkit to reimplement existing methods and develop their own new methods.
We appreciate all the contributors who implement their methods or add new features, as well as users who give valuable feedback. Thank you all!
Please read their detailed documentation for further details.
(Following the guide provided in the GitHub link wasn't easy, so read the documentation to clear up some issues. But it works fine in CPU mode on my end. It's a multi-tool, so try the other features later.)
2. If number 1 is hard to install and run on your end, better to switch to the one I mentioned in Part 1.
Stable Diffusion-NCNN (EdVince's GitHub repo):
It's not far off from this project: Stable-Diffusion implemented with the ncnn framework, based on C++, with txt2img and img2img supported!
Zhihu: (link in the original repo)
Video: (link in the original repo)
txt2img Performance (time per-it and RAM)
per-it | i7-12700 (512x512)     | i7-12700 (256x256)     | Snapdragon 865 (256x256)
slow   | 4.85s / 5.24G (7.07G)  | 1.05s / 3.58G (4.02G)  | 1.6s / 2.2G (2.6G)
fast   | 2.85s / 9.47G (11.29G) | 0.65s / 5.76G (6.20G)  | -
News
2023-03-11: happy to add img2img android and release new apk
2023-03-10: happy to add img2img x86
2023-01-19: speed up & less ram in x86, dynamic shape in x86
2023-01-12: update to the latest ncnn code and use optimize model, update android, add memory monitor
2023-01-05: add 256x256 model to x86 project
2023-01-04: merge and finish the mha op in x86, enable fast gelu
Demo
(demo images are in the original repository)
Out of box
All models and the exe file can be downloaded from the links in the repo or from the Releases page.
If you only need the ncnn models, you can get them from the link in the repo; it is faster and free.
x86 Windows
- enter the x86 Windows demo folder in the repo
- download the 4 bin files: AutoencoderKL-fp16.bin, FrozenCLIPEmbedder-fp16.bin, UNetModel-MHA-fp16.bin, AutoencoderKL-encoder-512-512-fp16.bin and put them in the assets folder
- set up your config in magic.txt; the lines are, in order (see the example magic.txt after these steps):
- height (must be a multiple of 128, minimum is 256)
- width (must be a multiple of 128, minimum is 256)
- speed mode (0 is slow but low RAM, 1 is fast but high RAM)
- step number (15 is not bad)
- seed number (set 0 to be random)
- init image (if the file exists, run img2img; if not, run txt2img)
- positive prompt (describe what you want)
- negative prompt (describe what you don't want)
- run stable-diffusion.exe
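For reference, here is what a magic.txt might look like, following the line order described above. The values (size, steps, seed, filenames, prompts) are purely illustrative, my own addition rather than a sample from the repo; the init-image line only triggers img2img if that file actually exists:
256
256
0
15
42
input.png
masterpiece, best quality, a cat sitting on a windowsill, sunlight
lowres, bad anatomy, blurry, watermark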
Android APK
- download and install the apk from the link
- at the top, the first field is the step count and the second one is the seed
- at the bottom, the top field is the positive prompt and the bottom one is the negative prompt (leave empty to use the default prompt)
- note: the apk needs 7G of RAM, runs very slowly, and consumes a lot of power
Implementation Details
Note: Please comply with the requirements of the SD model and do not use it for illegal purposes.
- Three main steps of Stable-Diffusion:
- CLIP: text-embedding
- (only img2img) encode the init image to the init latent
- iterative sampling with the sampler
- decode the sampler results to obtain the output images
- Model details:
- Weights: Naifu (you know where to find them)
- Sampler: Euler ancestral (k-diffusion version)
- Resolution: dynamic shape, but must be a multiple of 128, minimum is 256
- Denoiser: CFGDenoiser, CompVisDenoiser (see the sketch below)
- Prompt: positive & negative, both supported
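Since the denoisers listed above are CFG-style (classifier-free guidance), here is a minimal PyTorch sketch of the idea, my own addition rather than code from this repo; the function name and the guidance scale value are illustrative:
import torch

def cfg_combine(noise_uncond: torch.Tensor, noise_cond: torch.Tensor, guidance_scale: float = 7.5) -> torch.Tensor:
    # classifier-free guidance: move the prediction away from the
    # unconditional/negative-prompt prediction toward the positive one
    return noise_uncond + guidance_scale * (noise_cond - noise_uncond)

# toy usage with random tensors standing in for two UNet noise predictions
noise_uncond = torch.randn(1, 4, 32, 32)
noise_cond = torch.randn(1, 4, 32, 32)
guided = cfg_combine(noise_uncond, noise_cond, guidance_scale=7.5)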
Code Details
Compile for x86 Windows
- download the 4 bin files: AutoencoderKL-fp16.bin, FrozenCLIPEmbedder-fp16.bin, UNetModel-MHA-fp16.bin, AutoencoderKL-encoder-512-512-fp16.bin and put them in the assets folder
- open the VS2019 project and compile it as Release & x64
Compile for x86 Linux / macOS
- cd x86/linux
- set up the dependencies (see the link in the repo)
- build the demo with CMake:
mkdir -p build && cd build
cmake ..
make -j$(nproc)
- download the 3 bin files: AutoencoderKL-fp16.bin, FrozenCLIPEmbedder-fp16.bin, UNetModel-MHA-fp16.bin and put them in the build/assets folder
- run the demo:
./stable-diffusion-ncnn
Compile for Android
- download the 3 bin files: AutoencoderKL-fp16.bin, FrozenCLIPEmbedder-fp16.bin, UNetModel-MHA-fp16.bin and put them in the assets folder
- open Android Studio and run the project
ONNX Model
I've uploaded the three onnx models used by Stable-Diffusion, so that you can do some interesting work.
You can find them from the link above.
Statements
- Please consciously abide by the agreement of the Stable Diffusion model, and DO NOT use it for illegal purposes!
- If you use these onnx models to make open-source projects, please inform me; I'll follow along and look forward to your next great work.
Instructions
ncnn (input & output): token, multiplier, cond, conds
- FrozenCLIPEmbedder
onnx (input & output): onnx::Reshape_0, 2271
z = onnx(onnx::Reshape_0=token)
origin_mean = z.mean()
z *= multiplier
new_mean = z.mean()
z *= origin_mean / new_mean
conds = torch.concat([cond,z], dim=-2)
ncnn (input & output): in0, in1, in2, c_in, c_out, outout
- UNetModel
onnx (input & output): x, t, cc, out
outout = in0 + onnx(x=in0 * c_in, t=in1, cc=in2) * c_out
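To make the FrozenCLIPEmbedder math above a bit more concrete, here is a small standalone PyTorch sketch of the prompt-weighting step, my own addition; the tensor shapes and the multiplier value are illustrative, not taken from the repo:
import torch

def weight_embedding(z: torch.Tensor, multiplier: float) -> torch.Tensor:
    # scale the token embeddings, then rescale so the overall mean stays unchanged
    origin_mean = z.mean()
    z = z * multiplier
    new_mean = z.mean()
    return z * (origin_mean / new_mean)

# toy usage: 77 tokens x 768-dim CLIP embeddings
z = torch.randn(1, 77, 768)
cond = torch.randn(1, 77, 768)
conds = torch.concat([cond, weight_embedding(z, multiplier=1.2)], dim=-2)  # concat along the token axis, as in the notes above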
Both of them are OK in CPU mode as well. Just check the Android mode supplied there by EdVince. The models used in the UI are anime-only. It's up to you to find other models elsewhere if there are any.