# Generative Machine Learning: GAN

## GAN

The dataset was web-scraped for roughly 20k original samples. A custom Mask R-CNN model was then trained for image segmentation and cropping, and the cropped images were fed into a 128×128 DCGAN trained on local hardware (GTX 1660).
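The 128×128 DCGAN generator described above can be sketched as a stack of transposed convolutions in PyTorch. This is a minimal illustration only; the layer widths, latent size `z_dim`, and `base` channel count are assumptions, not the original model's documented architecture.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Hypothetical 128x128 DCGAN generator: each ConvTranspose2d
    doubles the spatial resolution (4 -> 8 -> 16 -> 32 -> 64 -> 128)."""

    def __init__(self, z_dim=100, base=64):
        super().__init__()
        self.net = nn.Sequential(
            # z_dim x 1 x 1 -> (base*16) x 4 x 4
            nn.ConvTranspose2d(z_dim, base * 16, 4, 1, 0, bias=False),
            nn.BatchNorm2d(base * 16), nn.ReLU(True),
            # -> (base*8) x 8 x 8
            nn.ConvTranspose2d(base * 16, base * 8, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base * 8), nn.ReLU(True),
            # -> (base*4) x 16 x 16
            nn.ConvTranspose2d(base * 8, base * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base * 4), nn.ReLU(True),
            # -> (base*2) x 32 x 32
            nn.ConvTranspose2d(base * 4, base * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base * 2), nn.ReLU(True),
            # -> base x 64 x 64
            nn.ConvTranspose2d(base * 2, base, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base), nn.ReLU(True),
            # -> 3 x 128 x 128, tanh maps pixels to [-1, 1]
            nn.ConvTranspose2d(base, 3, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

g = Generator()
fake = g(torch.randn(2, 100, 1, 1))  # batch of 2 latent vectors
print(fake.shape)  # torch.Size([2, 3, 128, 128])
```

The segmentation-and-crop preprocessing step matters because DCGANs trained on loosely framed web-scraped images tend to waste capacity on backgrounds; cropping to the segmented subject keeps the distribution tight.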

* [Collections of GANs : Pytorch implementation of unsupervised GANs.](https://github.com/w86763777/pytorch-gan-collections)

## Generative Model Course

* [CSE 599](https://courses.cs.washington.edu/courses/cse599i/20au/)
* [deepgenerativemodels](https://deepgenerativemodels.github.io/notes/index.html)

## GAN Course

* [Deep Learning Specialization - DeepLearning.AI](https://www.deeplearning.ai/program/deep-learning-specialization/)
* [Machine Learning for Musicians - Berklee](https://college.berklee.edu/courses/mtec-345)
* [MUSIC DATA MINING - a Music Information Retrieval (MIR) Online Course at Kadenze](https://www.kadenze.com/courses/machine-learning-for-music-information-retrieval/info)
* [Arts and Entertainment Technology - Online Course - Kadenze](https://www.kadenze.com/courses/foundations-of-arts-and-entertainment-technologies-i/info)
* [Introduction to Generative Arts and Computational Creativity - an Online Course at Kadenze](https://www.kadenze.com/courses/generative-art-and-computational-creativity/info)
* [UAL - Apply Creative Machine Learning - Institute of Coding](https://instituteofcoding.org/courses/course/ual-apply-creative-machine-learning/)
* [Machine Learning for Musicians and Artists - an Online Machine Art Course at Kadenze](https://www.kadenze.com/courses/machine-learning-for-musicians-and-artists/info)
* [Artificial Images - YouTube](https://www.youtube.com/user/bustbright/playlists)

## GAN Project/Paper

* [NVlabs/stylegan3: Official PyTorch implementation of StyleGAN3](https://github.com/NVlabs/stylegan3)
* [AI Demos - NVIDIA Research](https://www.nvidia.com/en-us/research/ai-demos/) GAN
* [HyperStyle](https://yuval-alaluf.github.io/hyperstyle/) StyleGAN Inversion
* [Pollinations.AI](https://old.pollinations.ai/)

## Style GAN

* [Making Anime Faces With StyleGAN · Gwern.net](https://www.gwern.net/Faces?ref=mlnews#examples)
* [cedricoeldorf/ConditionalStyleGAN: Conditional implementation for NVIDIA's StyleGAN architecture](https://github.com/cedricoeldorf/ConditionalStyleGAN)
* [NVlabs/stylegan: StyleGAN - Official TensorFlow Implementation](https://github.com/NVlabs/stylegan)
  * [P StyleGAN on Oxford Visual Geometry Group Flowers 102 Dataset 💐🌻🌷🥀🌺🌹🌸🌼 : computervision](https://old.reddit.com/r/computervision/comments/bfcnbj/p_stylegan_on_oxford_visual_geometry_group/)
  * [Visual Geometry Group - University of Oxford](https://www.robots.ox.ac.uk/~vgg/data/flowers/)
* [colinrsmall/ehm\_faces](https://github.com/colinrsmall/ehm_faces)
* [StyleGAN versions](https://nvlabs.github.io/stylegan2/versions.html)
* [t04glovern/stylegan-pokemon: Generating Pokemon cards using a mixture of StyleGAN and RNN to create beautiful & vibrant cards ready for battle!](https://github.com/t04glovern/stylegan-pokemon)

## Paper

* [2011.05552 End-to-End Chinese Landscape Painting Creation Using Generative Adversarial Networks](https://arxiv.org/abs/2011.05552)
* [1812.04948 A Style-Based Generator Architecture for Generative Adversarial Networks](https://arxiv.org/abs/1812.04948)
* [EditGAN](https://nv-tlabs.github.io/editGAN/)

## GAN Image Superresolution

* [Cupscale](https://github.com/n00mkrad/cupscale)
* [Real-ESRGAN](https://github.com/xinntao/Real-ESRGAN)
* [hi, generate with your photo](https://filter.mot.omg.lol/)

## Style-GAN

* [junyanz/CycleGAN: Software that can generate photos from paintings, turn horses into zebras, perform style transfer, and more.](https://github.com/junyanz/CycleGAN)
* [kaonashi-tyc/zi2zi: Learning Chinese Character style with conditional GAN](https://github.com/kaonashi-tyc/zi2zi)
* [rosinality/style-based-gan-pytorch: Implementation A Style-Based Generator Architecture for Generative Adversarial Networks in PyTorch](https://github.com/rosinality/style-based-gan-pytorch)
* [taki0112/StyleGAN-Tensorflow: Simple & Intuitive Tensorflow implementation of StyleGAN (CVPR 2019 Oral)](https://github.com/taki0112/StyleGAN-Tensorflow)
* [mtobeiyf/sketch-to-art: 🖼 Create artwork from your casual sketch with GAN and style transfer](https://github.com/mtobeiyf/sketch-to-art)
* [taki0112/UGATIT: Official Tensorflow implementation of U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation (ICLR 2020)](https://github.com/taki0112/UGATIT)
* [zeka-io/selfie-to-anime](https://github.com/zeka-io/selfie-to-anime)
* [boistud/StyleArtGan](https://github.com/boistud/StyleArtGan)
* [heavenstobetsy/ArtGenerationwithStyleGan: Fauvist art generation using StyleGAN](https://github.com/heavenstobetsy/ArtGenerationwithStyleGan)
* [zhenxuan00/triple-gan: See Triple-GAN-V2 in PyTorch: https://github.com/taufikxu/Triple-GAN](https://github.com/zhenxuan00/triple-gan)
* [Mawiszus/TOAD-GAN: Official repository for "TOAD-GAN: Coherent Style Level Generation from a Single Example" by Maren Awiszus, Frederik Schubert and Bodo Rosenhahn.](https://github.com/Mawiszus/TOAD-GAN)
* [schrum2/GameGAN: Interactive GAN evolution of Mario and Zelda levels.](https://github.com/schrum2/GameGAN)
* [changebo/HCCG-CycleGAN: Handwritten Chinese Characters Generation](https://github.com/changebo/HCCG-CycleGAN)

## Research

* [An enhanced 3D model and generative adversarial network for automated generation of horizontal building mask images and cloudless aerial photographs - ScienceDirect](https://www.sciencedirect.com/science/article/pii/S1474034621001336?via%3Dihub)

* [Realless](https://realless.glitch.me/) Generative webs with blinking eyes

## GAN Face Restoration

* [Gradio demo for GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior](https://huggingface.co/spaces/akhaliq/GFPGAN)

## Stable Diffusion

* [The Illustrated Stable Diffusion–Jay Alammar–Visualizing machine learning one concept at a time.](https://jalammar.github.io/illustrated-stable-diffusion/)

