Thanks @kaliyuga. Forewarned is forearmed...
A question: didn't you train your own model? If so, I imagine you could craft a dataset of purely public-domain pictures and collect all the credits in a file that could serve as a reference for commercial use... A bit tricky, but it might be a temporary solution until there's better legal clarity.
I've built a number of AI models, but right now I'm mostly working with things others built. Basically, I'm just not tokenizing any art I've made using models I didn't personally train. My instinct is that posting art made with others' models on Hive and being rewarded by the community isn't really the same as "commercial use," though, so I'm doing plenty of that with things I've made using VQGAN+CLIP and CLIP-guided diffusion Colab notebooks. In short, I'm making a good-faith effort not to screw anyone over, and hopefully there will be clearer legal guidance soon :)
I agree, posting is not commercial use: there is no real trade, it's the post that is rewarded, and it's not claiming copyright ownership...
Oh, CLIP-guided diffusion... did you manage to get results on a free Colab? If so, I'd be interested to know which notebook you found, because on a free Colab I have to wait hours and hours to get a 256x256 picture (I usually get a K80 with 12 GB of GPU memory...). I wonder if there are optimized notebooks that need fewer resources to play with guided diffusion, because the pictures it produces look more polished and detailed than VQGAN's. So far I haven't found any.
I've been using RiversHaveWings's 256x256 HD notebook, and it does take a remarkably long time, but diffusion is a more resource-intensive process than VQGAN. I'm pretty sure a less resource-intensive notebook using the same underlying architecture would produce much less accurate results. I find the diffusion notebook more than worth the wait, honestly; I've gotten some incredible results with it. 256x256 is definitely small, but I upscale my outputs using an ESRGAN notebook by @jotakrevs. Successful prompt engineering for diffusion is different from VQGAN+CLIP, too. Just by feel, I'd say VQGAN is more like sculpting with words/poetry and diffusion is more like building/architecture. I don't know if that makes sense.
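(Editorial aside on why the wait is so long: a diffusion model generates by iteratively denoising over hundreds or thousands of timesteps, one network pass each, whereas VQGAN+CLIP optimizes a single latent image. Below is a toy, stdlib-only sketch of the DDPM-style forward noise schedule that this family of notebooks is built on; it is illustrative only, not the actual notebook's code, and the schedule values are just a common textbook choice.)

```python
import math
import random

def linear_beta_schedule(steps, beta_start=1e-4, beta_end=0.02):
    """Per-timestep noise variance, increasing linearly (a common DDPM choice)."""
    return [beta_start + (beta_end - beta_start) * t / (steps - 1)
            for t in range(steps)]

def cumulative_alphas(betas):
    """alpha_bar_t = product of (1 - beta) up to t: how much signal survives."""
    out, prod = [], 1.0
    for b in betas:
        prod *= 1.0 - b
        out.append(prod)
    return out

def forward_noise(x0, t, alpha_bars, rng):
    """Sample x_t from q(x_t | x_0): shrink the clean signal, add Gaussian noise."""
    a = alpha_bars[t]
    return [math.sqrt(a) * px + math.sqrt(1.0 - a) * rng.gauss(0.0, 1.0)
            for px in x0]

steps = 1000
alpha_bars = cumulative_alphas(linear_beta_schedule(steps))
rng = random.Random(0)
x0 = [0.5] * 8                              # a tiny stand-in for an image
nearly_clean = forward_noise(x0, 10, alpha_bars, rng)
nearly_noise = forward_noise(x0, steps - 1, alpha_bars, rng)
# Sampling has to undo this chain one step at a time,
# which is roughly `steps` model passes per generated image.
```

That per-step network pass is the cost you can't cheaply optimize away, which matches the observation above that a lighter notebook on the same architecture would lose quality.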
Thanks for your answer. I think we are indeed aligned. If I find useful alternative resources, I'll share them.
Please do!!!
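(Editorial aside on the 256-to-512 upscaling mentioned above: ESRGAN is a learned super-resolution network, so it predicts plausible high-frequency detail rather than just duplicating pixels. For contrast, here is the naive baseline it improves on, nearest-neighbour upscaling, as a toy stdlib sketch; this is unrelated to the actual @jotakrevs notebook.)

```python
def upscale_nearest(img, factor):
    """Nearest-neighbour upscale: copy each pixel into a factor x factor block."""
    out = []
    for row in img:
        wide = [px for px in row for _ in range(factor)]  # widen the row
        out.extend([wide[:] for _ in range(factor)])      # then repeat it
    return out

tiny = [[1, 2],
        [3, 4]]                     # a 2x2 "image"
big = upscale_nearest(tiny, 2)      # 4x4: no new detail, just bigger blocks
```

A 2x nearest-neighbour pass turns a 256x256 output into a blocky 512x512; ESRGAN's trained network is what makes the upscaled result look sharp instead.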
You both had an interesting talk here! I can't access the notebook due to permissions... maybe you shared a link that only works for people you add to the list.
Diffusion, this is the first time I've heard of it. I'm eager to test this stuff 😛
Here's a link to the tweet announcing the notebook! That should work for you :)
I think you'll really love CLIP-guided diffusion. It produces more realistic/accurate results than VQGAN+CLIP, and diffusion models tend to outperform GANs in general, too :)
Oh yay thanks! I'll check it later!
Hi @jotakrevs. First off, sorry to butt in like this in this comment thread. I have seen amazing CLIP-guided diffusion pictures on RiversHaveWings's Twitter feed. Just look at those images and you'll be tempted to try it out. Thank you @kaliyuga for posting the links to the notebooks. I'll check it out later too.
This one is for those who have Colab Pro; it needs 16 GB of video memory:
@kaliyuga thanks a lot, I'll try it soon; I've been busy with my current AI artworks.
@dbddv01 oh, that looks interesting, I'll take a look, thanks!
@lavista wow, 512 sounds better than 256... I have the cheap Colab Pro tier, so I usually get a T4 GPU, which isn't enough, and from time to time a P100. I'll try this notebook when I get a P100. No need to say sorry; I was the first one to butt in in this chat hehe.
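(Editorial aside: a quick way to see which GPU Colab assigned you, and whether it clears the 16 GB bar mentioned above, is to run `nvidia-smi --query-gpu=name,memory.total --format=csv,noheader` in a cell. A small sketch that parses a line of that output follows; the sample strings are made-up examples of the CSV format, not output from a real session.)

```python
def enough_vram(csv_line, needed_mib=16000):
    """Parse one `name, memory.total` CSV line (nvidia-smi format) and
    compare the reported VRAM against a requirement in MiB."""
    name, mem = (field.strip() for field in csv_line.split(","))
    total_mib = int(mem.split()[0])          # e.g. "11441 MiB" -> 11441
    return name, total_mib, total_mib >= needed_mib

# Hypothetical sample lines in nvidia-smi's CSV format:
print(enough_vram("Tesla K80, 11441 MiB"))             # K80: well under 16 GB
print(enough_vram("Tesla P100-PCIE-16GB, 16280 MiB"))  # P100: clears the bar
```

The `needed_mib=16000` threshold is just the 16 GB figure quoted for the 512x512 notebook; adjust it for whatever notebook you're trying to run.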
Regards to all!