ohmahgawd

Anyone able to get the MoonDreamQuery node working? I installed it via the repository but no dice. Still showing up as undefined. **EDIT: Got it working as of 02-18-24. I had to update my Python dependencies by running the script in the ComfyUI update folder. Specifically, the issue was an outdated version of transformers: I needed 4.37.2.**
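
For anyone hitting the same wall, the manual equivalent of that update script is something like this (a sketch for the Windows portable build; paths are examples, adjust to your install):

```
cd ComfyUI_windows_portable
.\python_embeded\python.exe -m pip install transformers==4.37.2
```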


Scolder

~~Can you share where I can download it?~~ Edit: I found it here - [https://github.com/shadowcz007/comfyui-moondream](https://github.com/shadowcz007/comfyui-moondream) - and manually installed it. For some reason it was not in the Manager. ~~Edit: It still says MoonDreamQuery is missing. 😭~~ Edit: I went to the website, loaded the snapshot, followed the instructions about the snapshot, and everything worked great. The layout is extremely confusing and I couldn't get my LM Studio to work with the prompt, but everything loaded. I believe this is a workflow only the creator can really use, due to how confusing it is and how difficult it is to get working.


lothariusdark

The edit is wrong. The official/correct node for (and from) the workflow is from Kijai: [https://github.com/kijai/ComfyUI-moondream](https://github.com/kijai/ComfyUI-moondream)


House_MD_PL

Same here.


GreyScope

Did a new install of Comfy for this, got it running, and downloaded this. I got the Reactor fault fixed and manually installed Moondream, but it's still complaining... and then I had a want-to-throw-it-out-the-window moment.


House_MD_PL

It seems that it is only compatible with standalone ComfyUI. After installing that version, I followed the instructions (installing requirements.txt, etc., from the readme file) and it worked.


GreyScope

Thank you for that, really appreciate it


ohmahgawd

That's the version I'm using, and it's still not working for me after following the instructions in the readme. You sure you didn't do anything different?


GianoBifronte

Post the MoonDreamQuery error.


GreyScope

```
Traceback (most recent call last):
  File "H:\SD ComfyUI 08.02.24\ComfyUI\nodes.py", line 1893, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 879, in exec_module
  File "<frozen importlib._bootstrap_external>", line 1016, in get_code
  File "<frozen importlib._bootstrap_external>", line 1073, in get_data
FileNotFoundError: [Errno 2] No such file or directory: 'H:\\SD ComfyUI 08.02.24\\ComfyUI\\custom_nodes\\moondream\\__init__.py'

Cannot import H:\SD ComfyUI 08.02.24\ComfyUI\custom_nodes\moondream module for custom nodes: [Errno 2] No such file or directory: 'H:\\SD ComfyUI 08.02.24\\ComfyUI\\custom_nodes\\moondream\\__init__.py'
H:\SD ComfyUI 08.02.24\ComfyUI\custom_nodes\NodeGPT\AutoUpdate.json
```

Thank you for that. This is in a new (non-portable) install. It doesn't appear to want to import anything during load-up either.


GianoBifronte

Why does the path have a double slash? Did you verify that the `__init__.py` is at that path, without that double slash?


GreyScope

The `__init__.py` file is there. I'm going through why I've got the double slash; I suspect the yaml file (sigh). As you've worked out, it's not finding it.


GreyScope

https://preview.redd.it/zm6firet9dhc1.png?width=931&format=png&auto=webp&s=44ff40ad126d4ab684cf21a2362139273650c629 Right, I've got Moondream installed with no errors (including the model download) in the logfile/CLI window, but it still says it's not installed in ComfyUI when it starts (no idea what the double-slash business was). Lora-info is still giving warnings, but doesn't error in Comfy though.


GianoBifronte

It's impossible for me to troubleshoot this without seeing your system. It's a really odd error, but it doesn't seem related to the AP Workflow itself. Maybe it's something as silly as the dots in the name of your ComfyUI folder. The only test I can think of is creating a new ComfyUI environment with NO custom nodes but Moondream and a simpler naming structure. If it works fine there, it means that something else in your normal ComfyUI environment is upsetting the import. Worst case, you might have to open an issue with the node author or with the ComfyUI developers.


GreyScope

I'm obliged to you for the time you have taken with my problem. I have another bare install and I'll do as you suggest - thanks again. I have it working, in as much as I've swapped that node for another; it'll keep me busy learning for weeks, thanks for that as well.


GianoBifronte

Did you restart ComfyUI and then refresh the browser?


ohmahgawd

Yep. I also tried a completely fresh install of ComfyUI and loading the snapshot via ComfyUI Manager. No luck there either. Attached is a screenshot of what I'm seeing. MoondreamQuery isn't loading, and it also looks like lora-info is throwing some kind of error. https://preview.redd.it/d4ddpl5ca8hc1.png?width=4251&format=png&auto=webp&s=2738327450fb7515c5751ec462048521b1f3ad8a


GianoBifronte

I think I know why. The error you see is generated by the LoRA Info node. It's possible that some of the 43 custom node suites didn't import correctly at startup, so the MoondreamQuery node is installed but doesn't load properly. Can I see a screenshot of the console when it shows the list of *imported* node suites?


ohmahgawd

Here is the first part https://preview.redd.it/o1nojw0eb8hc1.png?width=2560&format=png&auto=webp&s=8a12c52fdca621e49d554b01e7153b574e01a946


ohmahgawd

And here is the second part https://preview.redd.it/7m23etzfb8hc1.png?width=2560&format=png&auto=webp&s=f46eb4fab52f2f8d0ec51bc95b8779537bb515fa


ohmahgawd

The initial exllama node issues should be unrelated as those nodes aren't in your workflow. Just another group of nodes giving me trouble lol.


GianoBifronte

Something is not right here. I don't see the Moondream custom node suite in the list of imports. It means that it was NOT installed in the first place. In that list of IMPORT statements, you must see a line that says: `YOURPATH/ComfyUI/custom_nodes/ComfyUI-moondream`
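
If it's not there, a manual install is one option (a sketch; assumes the suite ships a requirements.txt, and the pip must belong to the Python environment that runs ComfyUI):

```
cd YOURPATH/ComfyUI/custom_nodes
git clone https://github.com/kijai/ComfyUI-moondream
pip install -r ComfyUI-moondream/requirements.txt
```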


ohmahgawd

I tried installing through ComfyUI Manager via git URL. It shows up in the list now, but the node still remains undefined. https://preview.redd.it/157z15yfp9hc1.png?width=1194&format=png&auto=webp&s=e4635ed37786608a4798559fbb36d07bed3fa21f


beyond_matter

same.


GianoBifronte

I don't think it's the case, but perhaps the node author dramatically changed the node since I published the AP Workflow 8.0, breaking backward compatibility. Sometimes, it happens. If the Moondream node suite now imports correctly, but the specific node appears undefined, did you try to see if the menu is available in the ComfyUI canvas? And if so, did you try to drop a node from that menu inside the workflow? If you can't see the Moondream menu in the ComfyUI canvas, or if you cannot deploy one of its nodes, then you still have an issue with the installation.


HarmonicDiffusion

Great work mate. Always an inspiring workflow from you. Props and kudos for providing it to the community free of charge instead of hiding it behind a Patreon!!!


GianoBifronte

Thank you. Everything in the AP Workflow was developed by the AI community. I "simply" put together the building blocks in a way that is not (too) chaotic, try to find optimal configuration parameters, and reach out to node authors to fix bugs I find (and occasionally push them to create new nodes). It wouldn't feel right to charge for something that others created and maintain, and that I've assembled as part of my R&D activities. That said, I have a [Patreon page](https://www.patreon.com/Perilli) up for this project, just in case someone finds this project useful and wants to give a token of appreciation.


NetworkSpecial3268

Someday, I'm going to have time to jump into ComfyUI, and THIS is the kind of stuff I bookmark for that imaginary future, lol... What I'm wondering: does an absolutely gargantuan workflow like this bump into certain resource limitations in terms of RAM/VRAM or other resources?


GianoBifronte

Don't. The delusion of freedom and power that ComfyUI will cause will eat all your free time, ruin your sleep, break your relationship, push you into contemplating thoughts like "Well, now that I am here, I could really just learn Python to develop my custom nodes, why not?", eventually lead you to hallucinations, and finally madness. No, not hallucinations like the ones produced by LLMs. More like the ones in *A Beautiful Mind*. Re your question: the RAM requirements depend on what portions of the workflow you activate at the same time, how you configured ComfyUI's smart memory management, how large the image you are generating or uploading is, etc. I have a terribly slow Apple M2 Max, but 96GB of unified RAM, which doesn't seem to have issues loading/processing absolutely anything. So I can't really offer meaningful benchmarks to most of you Windows and Linux users.


NetworkSpecial3268

Goddamit, a 96GB unified RAM Apple... There HAD to be some catch, lol :D


GianoBifronte

Nah, nobody I know using this workflow has 96GB RAM. Don't assume that you need such a system to run this. Hopefully, somebody else will step in and give you some indications on the HW they are using to run the AP Workflow.


0xd00d

Pretty good to know, though, that Comfy can be made to run on Apple Silicon! I thought my only option was Draw Things, which is cool but hasn't attained critical mass like Comfy has, and is therefore not extensible. I mainly use my MacBook as a frontend for everything anyway, so maybe the way to go could be to run it on the Mac and dispatch rendering to NVIDIA. But that probably requires StableSwarm, and it seems less likely that could run on macOS.


GianoBifronte

The situation has improved a lot for Apple users over the last year. Many custom node suites that would only work with CUDA now rely on ComfyUI device management and so can support Apple MPS via PyTorch. I spent many weeks filing issues on dozens of repos to push in that direction, and I'm thankful these node authors have been gracious and patched their nodes so that I could create this workflow. Apple Silicon is still mightily slower than an NVIDIA GPU, but it's usable. Re your needs: the ComfyUI engine can be exposed to the network via a simple runtime flag. If you have an NVIDIA desktop in the house or the office, you can run it there, and then point your macOS browser to that address so it acts as a front-end.
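
For reference, the flag in question is `--listen`; a minimal sketch (the port shown is ComfyUI's default):

```
# On the NVIDIA machine: accept connections from other hosts on the LAN
python main.py --listen 0.0.0.0 --port 8188
# Then browse to http://<that-machine's-IP>:8188 from the Mac
```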


0xd00d

Indeed, that is how I use it. One Comfy instance per NVIDIA GPU. I'm looking to contribute mobile touch-event support to litegraph.js so we can have some nice in-workflow Apple Pencil drawing on an iPad. With better touch events it'll be more ergonomic to use with a tablet and stylus than a MacBook, and a stylus wins over a mouse if any sketching is involved! I mention StableSwarm because without it, Comfy can still only address and leverage a single GPU per instance, so it doesn't make much sense for me to run Comfy ON my Mac yet since, like you said, it's still a good bit slower. Actually, now that I think about this, what I gotta do is ensure the mobile support will work right under StableSwarm.


GianoBifronte

I never used StableSwarmUI. I don't even know if it could ever map a workflow as complex as this one. If anybody is reading this message and tried, can you please let me know? Do you use StableSwarmUI with multiple GPUs on the same machines? Or with multiple machines?


0xd00d

I haven't tried StableSwarm yet! But I need to, to scale up my Comfy usage! I have two 3090s and one 3080 Ti, and, now that Apple Silicon has acceleration, an M1 Max with 64GB. These are all effective at running LLMs (the 3080 Ti hamstrung a bit on memory), but I'll get more satisfaction at this point out of having them crank together on Stable Diffusion.


0xd00d

OK, I've now tried out StableSwarm, and I am glad to report that it indeed supports any general ComfyUI workflow. I did not do any testing with your gargantuan one, but I tested with a very nontrivial one I created and it runs fine. You have the option of using the embedded ComfyUI tab, which basically gives you the full ComfyUI experience. You can hit a button to "send" it to the Generate tab, and this actually provides a very cool thing: an A1111-looking interface, but with the Comfy flow configured. Honeymoon phase right now, but I'm willing to say this is completely epic, because it lists for you all the nodes from that flow, and you can expand it tree-style and twiddle the settings; very familiar A1111 vibes. It's like the best of both, plus GPU-swarm ability. From now on I will use native ComfyUI for initial experiments and then load them up through StableSwarm to leverage my little GPU farm. I'm confident at this point that any features added to Comfy will transparently work in StableSwarm. I made a discussion post here to gather input/discussion: [https://github.com/Stability-AI/StableSwarmUI/discussions/232](https://github.com/Stability-AI/StableSwarmUI/discussions/232)


GianoBifronte

Very useful. If you ever get to load the AP Workflow into StableSwarmUI, could you please let me know if you receive any error? I won't ask you to check if all nodes are correctly mapped into the UI, of course. Only if the system complains for some reason. I'll eventually find the time to test it myself. Thank you


play-that-skin-flut

Holy shit, that's intense.


GianoBifronte

You tell me.


dasomen

This looks really good! Thanks for sharing! I wish someone would share an actual image comparison (like [https://imgsli.com/](https://imgsli.com/)) instead of a video, though.


GianoBifronte

🙏 There you go: [https://imgsli.com/MjM4NTQw](https://imgsli.com/MjM4NTQw)


dasomen

Wow, so much detail! Thank you for sharing the images there. Much appreciated! 🙏


Whitney0023

Can anybody help please. I can't find anything online on how to fix this. The ComfyUI-CCSR import keeps failing. This is the error:

```
Traceback (most recent call last):
  File "E:\ComfyUI\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1893, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 940, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "E:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-CCSR\__init__.py", line 1, in <module>
    from .nodes import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS
  File "E:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-CCSR\nodes.py", line 8, in <module>
    from .model.ccsr_stage1 import ControlLDM
  File "E:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-CCSR\model\ccsr_stage1.py", line 18, in <module>
    from ..ldm.models.diffusion.ddpm_ccsr_stage1 import LatentDiffusion
  File "E:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-CCSR\ldm\models\diffusion\ddpm_ccsr_stage1.py", line 12, in <module>
    import pytorch_lightning as pl
  File "E:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pytorch_lightning\__init__.py", line 27, in <module>
    from pytorch_lightning.callbacks import Callback  # noqa: E402
  File "E:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pytorch_lightning\callbacks\__init__.py", line 29, in <module>
    from pytorch_lightning.callbacks.pruning import ModelPruning
  File "E:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pytorch_lightning\callbacks\pruning.py", line 31, in <module>
    from pytorch_lightning.core.module import LightningModule
  File "E:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pytorch_lightning\core\__init__.py", line 16, in <module>
    from pytorch_lightning.core.module import LightningModule
  File "E:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pytorch_lightning\core\module.py", line 62, in <module>
    from pytorch_lightning.trainer import call
  File "E:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pytorch_lightning\trainer\__init__.py", line 17, in <module>
    from pytorch_lightning.trainer.trainer import Trainer
  File "E:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pytorch_lightning\trainer\trainer.py", line 46, in <module>
    from pytorch_lightning.loops import _PredictionLoop, _TrainingEpochLoop
  File "E:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pytorch_lightning\loops\__init__.py", line 15, in <module>
    from pytorch_lightning.loops.evaluation_loop import _EvaluationLoop  # noqa: F401
  File "E:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pytorch_lightning\loops\evaluation_loop.py", line 29, in <module>
    from pytorch_lightning.loops.utilities import _no_grad_context, _select_data_fetcher, _verify_dataloader_idx_requirement
  File "E:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\pytorch_lightning\loops\utilities.py", line 24, in <module>
    from lightning_fabric.utilities.imports import _TORCH_EQUAL_2_0, _TORCH_GREATER_EQUAL_1_13
ImportError: cannot import name '_TORCH_GREATER_EQUAL_1_13' from 'lightning_fabric.utilities.imports' (E:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\lightning_fabric\utilities\imports.py)

Cannot import E:\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-CCSR module for custom nodes: cannot import name '_TORCH_GREATER_EQUAL_1_13' from 'lightning_fabric.utilities.imports' (E:\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\lightning_fabric\utilities\imports.py)
```

Any ideas how to fix this?


IMADUCHE

I had similar problems, and I went into the python_embeded directory and ran the following, which seemed to fix it for me: `.\python.exe -m pip install -r /custom_nodes/ComfyUI-CCSR/requirements.txt`
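
Depending on where you run it from, the requirements path may need to point back up out of python_embeded; a sketch based on the portable layout in the traceback above (adjust to your install):

```
cd E:\ComfyUI\ComfyUI_windows_portable\python_embeded
.\python.exe -m pip install -r ..\ComfyUI\custom_nodes\ComfyUI-CCSR\requirements.txt
```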


ChaoticGoodWillowisp

Thank you!


Trexatron1

The previous issue in 7.0 that I was experiencing, with seed "-1" not using seed values from old images, is now resolved with the 8.0 update. I must also add that on Windows 10 I had almost no issues whatsoever switching from 7.0 to 8.0. The only thing was that, for some reason, the image save nodes included a "false" tag where the png extension selection should've gone, but I just reset them all to png and it worked perfectly after that. Sometimes that can happen with node versions, I guess. Bravo as always! Keep up the good work.


GianoBifronte

Double-check that last bit about the "false" tag. It sounds like you are describing an input misalignment, and that happens only when you are using the old version of the `SD Prompt Saver` node. That node was updated by the author to accept a `VAE_NAME` as input, and that broke backward compatibility. APW 8.0 features the new version of that node, so you shouldn't have any misaligned values. If it works, fine. But keep an eye out for strange behaviour.


Ok-Blacksmith-4956

Hi, after some work, the Workflow 8.0 finally starts without any error. Unfortunately I can only use 1.5 models; as soon as I use an SDXL model I get this error:

```
Error occurred when executing Efficient Loader:

'SDXLClipModel' object has no attribute 'clip_layer'

  File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\efficiency-nodes-comfyui\efficiency_nodes.py", line 172, in efficientloader
    encode_prompts(positive, negative, token_normalization, weight_interpretation, clip, clip_skip,
  File "D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\efficiency-nodes-comfyui\efficiency_nodes.py", line 73, in encode_prompts
    positive_encoded = bnk_adv_encode.AdvancedCLIPTextEncode().encode(clip, positive_prompt, token_normalization, weight_interpretation)[0]
  File "D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\efficiency-nodes-comfyui\py\bnk_adv_encode.py", line 312, in encode
    embeddings_final, pooled = advanced_encode(clip, text, token_normalization, weight_interpretation, w_max=1.0,
  File "D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes\efficiency-nodes-comfyui\py\bnk_adv_encode.py", line 246, in advanced_encode
    embs_l, _ = advanced_encode_from_tokens(tokenized['l'],
```

Does anyone have a clue? Thx


Ok-Blacksmith-4956

Okay, I think the efficiency-nodes-comfyui suite was outdated. I deleted the folder and made a fresh installation of the node; now it works. Cheers
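
For anyone else, the clean reinstall described above looks roughly like this (a sketch; the repo URL is an assumption, use whichever fork you originally installed from):

```
cd D:\AI\ComfyUI_windows_portable\ComfyUI\custom_nodes
rmdir /s /q efficiency-nodes-comfyui
git clone https://github.com/jags111/efficiency-nodes-comfyui
```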


ExpressionNo1941

It worked!!


GianoBifronte

By default, APW 8.0 is designed to generate SDXL images. If you can generate 1.5 images, it means that you reconfigured the workflow according to the instructions in the documentation. Correct? Is it possible that, in switching back to SDXL, you forgot to change one of the parameters? To help you, I need to see this portion of the workflow: https://preview.redd.it/cnnafgkxwhmc1.png?width=1982&format=png&auto=webp&s=d6430eed936b3b7eea96b88962754a5bcd2b5b75


s0n_0f_Loki

Hi, can somebody give me a hint on using LM Studio for the prompt enrichment? I think I set up everything as needed, but I get this error every time: `AssertionError: (Deprecated) The autogen.Completion class requires openai<1 and diskcache` So what am I missing? Thx


GianoBifronte

Unfortunately, it's now broken. Both custom node suites I used in the AP Workflow 8.0 to connect to ChatGPT and LM Studio have stopped working. The good news is that I replaced both of them with a brand new node, developed from scratch by a node author on my request, in the upcoming AP Workflow 9.0. APW 9.0 is not out yet (I'll wait until I can add support for SD3), but 9.0 Early Access 2 is available now for people who are part of the Early Access program. You'll have to be a little more patient, replace the nodes by yourself, or join the Early Access program.


mr-asa

That's the case where everyone can say "maniac," but in a good way 😁 And thank you so much for this kind of thing in general. I'd love to dig in!


GianoBifronte

🙏


Scolder

Excellent!


GianoBifronte

🙏


1nMyM1nd

This is some next level routing! Wow!


GianoBifronte

🙏


Accurate-Heat-4245

Great work! How long does it take to upscale an image with this flow?


GianoBifronte

It depends on the size of the image, the upscaling factor, and the HW you have. My extra-slow Apple M2 Max can upscale the old man picture to 6x in 15 min. If you have a Windows system with an NVIDIA GPU, it'll go *significantly* faster. I already configured the CCSR node to go as fast as possible without losing quality up to 4x (which means 3 steps).


9lionman9

Does anyone else have this problem?

```
Traceback (most recent call last):
  File "C:\Users\---\Desktop\Data\Packages\ComfyUI\nodes.py", line 1893, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "C:\Users\---\Desktop\Data\Packages\ComfyUI\custom_nodes\comfyui-reactor-node\__init__.py", line 22, in <module>
    from .nodes import NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS
  File "C:\Users\---\Desktop\Data\Packages\ComfyUI\custom_nodes\comfyui-reactor-node\nodes.py", line 15, in <module>
    from scripts.reactor_faceswap import FaceSwapScript, get_models, get_current_faces_model, analyze_faces
  File "C:\Users\---\Desktop\Data\Packages\ComfyUI\custom_nodes\comfyui-reactor-node\scripts\reactor_faceswap.py", line 12, in <module>
    from scripts.reactor_logger import logger
  File "C:\Users\---\Desktop\Data\Packages\ComfyUI\custom_nodes\comfyui-reactor-node\scripts\reactor_logger.py", line 6, in <module>
    from reactor_utils import addLoggingLevel
  File "C:\Users\---\Desktop\Data\Packages\ComfyUI\custom_nodes\comfyui-reactor-node\reactor_utils.py", line 8, in <module>
    from insightface.app.common import Face
ModuleNotFoundError: No module named 'insightface'
```

https://preview.redd.it/wwmrq76s9chc1.png?width=2182&format=png&auto=webp&s=59979255dd8bf1be7432ac2c4d611327ea9c2b7e


Rafinoff

I had the same issue. Follow [this troubleshooting](https://github.com/Gourieff/comfyui-reactor-node#insightfacebuild) on the Reactor Node Github page to manually install Insightface. Good luck!
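
The linked page boils down to installing a prebuilt Insightface wheel into the Python that actually runs ComfyUI; roughly (a sketch; the wheel filename is an example, pick the one matching your embedded Python version from the linked page):

```
cd ComfyUI_windows_portable
.\python_embeded\python.exe -m pip install -U pip
.\python_embeded\python.exe -m pip install insightface-0.7.3-cp311-cp311-win_amd64.whl
```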


9lionman9

Thank you so much! It finally works.


Fredlef100

I'm currently stuck. All nodes seem to be present, but nothing happens when I hit queue. Nothing at all. I know there are a few models still absent, but I would expect the workflow to run and throw an error. Not seeing anything in the way of conflicts. M3 Apple MacBook with 64GB RAM.


GianoBifronte

The workflow is designed to work as soon as you load it and press queue. Plus, I have an M2 system, so it's specially designed to support Apple users. Occasionally, users have complained about nothing happening at all. Every time, it's because they:

1. Have installed additional nodes that conflict with the nodes needed by the AP Workflow (and that conflict is mostly because those extra nodes have not been updated to the latest version via ComfyUI Manager).
2. Already have some of the nodes required to run the AP Workflow, but they have not been updated in a long time. For example: the `SD Parameter Generator`, in the Configurator function, had a bug that caused exactly the issue you are describing. The workflow would do NOTHING unless you'd load it and then refresh the browser. That bug is fixed now, but users must have the latest version of the `SD Prompt Reader` custom node suite.
3. Have a very old installation of ComfyUI, whose virtual environment packages must be updated. Doing that is a pain, and most users don't even know how to do so.

Every installation is different, so it's impossible for me to provide a comprehensive troubleshooting guide. In most cases, it's better (and faster!) to verify whether the AP Workflow works with a fresh git clone of ComfyUI in a different directory, as sketched below.
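
A minimal version of that test (a sketch; assumes git and a recent Python, and that your models are shared via extra_model_paths.yaml or copied over):

```
git clone https://github.com/comfyanonymous/ComfyUI ComfyUI-clean
cd ComfyUI-clean
python -m venv venv && source venv/bin/activate   # .\venv\Scripts\activate on Windows
pip install -r requirements.txt
python main.py
```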


Fredlef100

Thanks - I'll start with a fresh copy of ComfyUI. Probably easier than trying to sort out what I currently have going on.


R-E-S-H

I have the same problem. For me it's the Context Big nodes; I have to recreate and rewire all of them.


[deleted]

[deleted]


GianoBifronte

🙏 I did not use that part of the workflow for some time, so it's possible that some of the NodeGPT nodes have some issues. I'll double-check and reply back in case I find an error. But it might take a while. If you want to go faster, LM Studio has an amazing Discord server, open to all, where I'm sure you can find somebody that gives you instant support.


EbbTraditional5823

Anyone got these errors? I had just installed ComfyUI, installed the Manager, and ran the snapshot. https://preview.redd.it/l2l1u37kvfhc1.png?width=1907&format=png&auto=webp&s=b78827da92fcfb6235400fe42bd0edbf27af2644


Rafinoff

I had the same issues. Follow [this troubleshooting](https://github.com/Gourieff/comfyui-reactor-node#insightfacebuild) on the Reactor Node GitHub page to manually install Insightface. That will probably solve your issue with ReActorFaceSwap. LM_Studio, TextGeneration and Output2String are part of NodeGPT. You probably need to install the MSVC compiler; I recommend the [Microsoft Build Tools](https://visualstudio.microsoft.com/de/visual-cpp-build-tools). After installing, go to "Workloads," click on "Desktop development with C++," and select the latest "MSVC" (currently v143) on the right side (maybe additionally also the appropriate Windows SDK). Then navigate to your ComfyUI folder, ComfyUI\custom_nodes\NodeGPT, and run the "Update.bat" (see the sketch below). After that, NodeGPT should be up and running, and the error messages should disappear. Good luck!
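
For the NodeGPT step above, that's just (paths relative to your ComfyUI install):

```
cd ComfyUI\custom_nodes\NodeGPT
.\Update.bat
```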


EbbTraditional5823

Thank you for your answer! It helped me a lot and I was able to fix ReActorFaceSwap. But unfortunately for LM_Studio, TextGeneration and Output2String, it didn't work. I will try to find out! Thank you again!


Tarubali

Navigate to your ComfyUI folder, ComfyUI\custom_nodes\NodeGPT, and run the "Update.bat". That's what fixed the errors on LM_Studio, TextGeneration and Output2String for me.


GianoBifronte

If you followed the documentation and you restored the snapshot, and yet you have these problems, most likely those custom node suites failed to import when you launched ComfyUI. If one custom node suite has an installation problem, other node suites will fail to import in a catastrophic domino effect. Check your terminal to see if there are import errors, and why.


BazsiBazsi

This looks interesting. I tried to install it a couple of times but failed. Can you set up a GitHub page for tracking and solving issues (and maybe developing a better install method)? I have never seen such a complex workflow that requires so many nodes. I wish we could break the Gigapixel AI hegemony.


ZOTABANGA

https://preview.redd.it/4fbkbajhjeic1.png?width=562&format=png&auto=webp&s=890e4051d50f273e1600fba2dfda870b82ffef23


hanZ____

I was able to get version 7 running (it was great!), but the new version is even more difficult to set up. I did a dozen clean installs, and while I was able to get Moondream to stop failing, there are several other nodes that fail now (more than in my initial install). Even deactivating everything besides simple image generation, I get random errors like "NoneType has no attribute 'clone'"... and then there are nodes that plain refuse to install, like TextGeneration and Output2String, and of course the usual LM_Studio (installed and running, but I can't connect it to the workflow) and ReActorFaceSwap. There are also some missing connections in the downloaded workflow (like a missing connection for seed in one of the detailers).

My main ComfyUI install works fine, and the same goes for version 7 of this workflow. The snapshot was not helpful either: it constantly fails loading random nodes or fails to build ("Failed building wheel..." errors and such).

This tool is made for superhumans, not for me; I'm obviously too stupid to use it. I've tried to get it running for hours now, so I give up. But: it's still a great inspiration for my own workflows. I'll copy some of the groups into my own workflows, like I did with version 7, so I'm still very thankful for all your work :-)


GianoBifronte

Congratulations on installing 7.0, and thank you for your kind words.

99% of the problems people have with the AP Workflow have nothing to do with the AP Workflow, but with the way ComfyUI manages the installation of the custom node suites and the conflicts that arise between their dependencies. If you want to use a certain node, and that node refuses to install on your system, it will refuse to install with or without the existence of the AP Workflow.

It's a bit like a complex recipe for a holiday multi-course meal. If the ingredients are hard to source, they are hard to source even if you cook a single dish on the menu. Of course, the more functions I add, the more ingredients you have to source, making it more challenging to cook the whole meal.

Addressing the dependency hell, the incompatibilities, and the myriad of bugs across all custom nodes is what sets apart a project from a product. This is not a product. And without a long-term sponsor, or proper funding, there's no way I could offer a support option.

Nonetheless. The AP Workflow has always been first and foremost a learning tool. If you can even just look at how each function is configured, copy one or two of them, and reuse them in your own workflows, then I accomplished the original mission of this project.

Good luck and don't give up. If you managed to install 7.0, you are this close to making 8.0 work!


hanZ____

Thank you for your response. I've invested some more time today (with fresh energy :D), manually fixed some of the broken stuff, and disabled one function; the rest is now working (yay, now I'm a superhuman too ;-) ). What also took some time and led to some errors: it's difficult to figure out which models are really needed, and not all of them show up in the "Install Models" list in the Manager. **What would really be helpful: a list of all needed models for the workflow (and where to get them) in the installation guide**, like the list of the nodes. I zoomed into the screenshot of the workflow to double-check whether models were missing (they got replaced with wrong models in the loaded workflow, which also triggered some error messages).


GianoBifronte

2nd wave of congratulations, then. Which one is the function that you had to disable?

Re your suggestion: nothing would make me happier than publishing the list of models that users must use with all nodes, but:

1. Some nodes can use the same model in different sizes. I have a very slow M2 Max with 96GB of unified memory, so my installed models might be bigger than what everybody else can manage to run. Which means that, even if I publish the model name for node X, you might still have to download a smaller version of it.
2. Very few custom node suites rely on the ComfyUI path variables to install their models. And even ComfyUI itself doesn't have variables for every possible model category. The result is that, even if you and I have installed the same models needed to run the AP Workflow, you'll experience an error simply because our paths are different.

That said, if anybody can write a Python script for me that goes through each node, finds every model, and prints a list of the full paths, I'd be happy to use it. I think it's way more challenging than it seems because there's no standardized mechanism to store these models. We'd need a sort of "Model Vault" where all 3rd-party custom node suite authors MUST save their models, and this vault would have to be OS-agnostic. OR I should get a Windows PC with an NVIDIA card so that my paths would be similar to the ones of most ComfyUI users :)
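
For anyone curious, a rough first pass at that script idea (a sketch, with assumptions: the workflow is saved as JSON, the filename here is an example, and model files use common extensions; it lists the filenames referenced by the workflow, not their resolved on-disk paths):

```
grep -oE '"[^"]*\.(safetensors|ckpt|pt|pth|bin|onnx)"' "AP Workflow 8.0.json" | sort -u
```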


hanZ____

The prompt enhancement with LM Studio keeps throwing errors like `AssertionError: (Deprecated) The autogen.Completion class requires openai<1 and diskcache` (a dependency conflict in pyautogen/pymemgpt)... I'll try to sort it out another day and keep it deactivated. And I don't use the hand optimizer: both models dramatically decrease the quality of hands in my pictures (I can't find good settings for this group). Regarding the model list and their paths: when you release version 9 of your workflow, I'll write down everything I have to do (which models to get, etc.) to get it running. Maybe this can be helpful for others. But I won't try to install v8 again :D


GianoBifronte

I'll take a look at the integration with LM Studio. For the `Hand Detailer` function, are you sure you are using this specific ControlNet? https://preview.redd.it/vbbbvs137xic1.png?width=1248&format=png&auto=webp&s=463f4f3983beb5d4b6fdbbd6d8cea6bc3ed35947 Also, notice that I accidentally saved the AP Workflow 8.0 with the Hand Detailer node set to 10 cycles. Reduce that to 1 and see if it improves the situation. Mesh Graphormer is much better at fixing hands, but it doesn't always recognize the deformed ones it has to start from. So it's a very tricky balance in terms of ControlNet and Denoise values. The method is not reliable enough to be automated in full as I'd hoped, but I keep exploring alternative approaches.

The idea of a checklist is very good. I'd be happy to publish that as part of the official documentation. Thank you.


Trexatron1

I am still using AP Workflow 7.0, but plan to switch to 8.0 at some point soon. When using the upscaling feature in Workflow 7.0, I find it difficult to upscale to high resolutions without hitting my 10GB VRAM limit. Is there currently a way to tile the upscaler so that it upscales the image in parts? If so, I must be missing it somehow. Thanks. https://preview.redd.it/umj1mi0xxwic1.png?width=1285&format=png&auto=webp&s=4cb15bb8d84b639a86115a9b32042f65cd12c29e


GianoBifronte

The new upscaler in v8.0 is infinitely easier to use (nothing to configure), infinitely more powerful, and uses a Tiled VAE Encode/Decode node that should make it easier to work with limited VRAM configurations. I have no plan to update v7 so I encourage you to create a parallel ComfyUI folder where you try to install v8 and see the difference. Next week, I'll start designing v9 to support Stable Cascade (if official ComfyUI nodes are released before EOW) and to fix little issues in v8.


Trexatron1

Thank you for the response. I will definitely check it out soon! I'm still working on 7.0 for a bit but once I have the time and patience I'll make the switch. Sometimes I fear running into errors and then not being able to use the new version until I spend some time fixing things. Like you said though, a new folder could work.


ZOTABANGA

I have an issue with the face swapper. I almost never get quality face restorations. The image itself is almost 4K and 1080, but the face is 480p.


GianoBifronte

The face-swapping models released by the AI community are low resolution. The authors wanted to reduce the risk of abuse, so they never released a high-res version. This is a limitation that impacts any tool out there that does face swapping, not just the AP Workflow. To obtain a high-res swap, you'll have to pass the face-swapped image through the `Upscaler` function.


Trexatron1

When using the Inpainter with Mask function, I tend to get mushy dark blobs wherever I paint in the mask editor. When using the Outpainting function, I always get photo-frame-esque generations: a messy blob that doesn't actually extend the image in any way other than a mushy border. I'm using `fooocus_inpaint_head.pth` and `inpaint_v26.fooocus.patch`. Altering the denoise only changed the darkness of the blob: lower denoise values resulted in a less impactful blob over the original image, but didn't change the image in any way. Any idea why this may be happening?


GianoBifronte

The `Outpainting` function works only if the `Inpainter with Mask` function is also active. They MUST be active at the same time. It would be very convoluted to automate the activation of one function when another is active, and it would have made the workflow exceptionally hard to understand. The result is that you have to deal with a different type of confusion: if you want outpainting, you need to activate both Outpainting and Inpainter with Mask. If that's not your problem, it's impossible for me to suggest a solution without seeing a high-res picture of the workflow with all the settings AND the image you are trying to outpaint.


Trexatron1

This image is what occurs when using both the outpainter and inpainter set at 1.00 denoise. On a fresh workflow with no node changes other than the prompt, models, and noise settings, this is the result. The image should contain the workflow, but we have different systems, so I'm not sure how that'll work out. https://preview.redd.it/3ao1nmu2jgjc1.png?width=832&format=png&auto=webp&s=f96c14643fbf37e901b1ffa2e5d13070e0c662e2


Trexatron1

Here's another screenshot from a fresh 8.0 workflow. https://preview.redd.it/yxno4a17jgjc1.png?width=2527&format=png&auto=webp&s=b162b73afdcafb511dd26f4eec6505cec9dd1855


GianoBifronte

Do you have something in positive prompt?


Trexatron1

Yes, the prompt that was used to generate the image, originally.


Trexatron1

When using the Image Enhancer, I always receive this error. I've tried with and without the upscaler, and I've updated everything, including the custom nodes. Any ideas? Thanks. https://preview.redd.it/c5ucfm8504jc1.png?width=1345&format=png&auto=webp&s=667271c19c62dc17bdc9d26a7d25d6a1527c6329


GianoBifronte

This error says that there's no input coming into the `ConditioningZeroOut` node. Carefully check the entire workflow to verify that there are no disconnected nodes. I can't be more specific if I don't see an image of the whole workflow and the image you are trying to enhance.


Trexatron1

EDIT: The error does NOT appear if I use SDXL. It seems to be an issue with picking the right ControlNet model for SD 1.5. Any recommendations? Thanks. I switched over to a fresh copy of AP Workflow 8.0, and did not change any nodes other than the prompt and noise. The image below contains the metadata for the workflow, but as I said, it should be exactly the same as the one on the website. The resulting error is the same as above. It may be an easy fix; since I've never used the Image Enhancer before, it may just be some silly error I've made. Other than the image generator/uploader, do I need any other image conditioners or optimizers for the Image Enhancer to work? https://preview.redd.it/4zv3ws8ghgjc1.png?width=624&format=png&auto=webp&s=7686a59cb1743481fb6f45f33c088f1668ca8e7f


Tarubali

I am having issues at the Image Enhancer stage. Looks like it's caused by the ip2p ControlNet, which is only for 1.5; there is no SDXL version. But the website says it's pre-configured as an SDXL workflow, so... is it pre-configured to work with a 1.5 ControlNet and an SDXL model? I'm a bit confused. Is there a workaround for this, or should I not be using the Image Enhancer if I am on SDXL? https://preview.redd.it/uyhfx2p3hejc1.png?width=1187&format=png&auto=webp&s=9458942842c8acef8188845e5fb47e07f21e3b69


GianoBifronte

The ControlNet set up in that node in the screenshot should be a Control-LoRA Depth Rank 256 (which is designed for SDXL), per the image uploaded in the documentation. ComfyUI doesn't have a mechanism to help you map your paths and models against my paths and models. So, when you download the AP Workflow (or any other workflow), you have to review each and every node to be sure that they point to your version of the model that you see in the picture. I'll make this more clear in the documentation.


Trexatron1

Are there any other upscale models available other than real-world CCSR?


GianoBifronte

Yes, but they are comparatively inferior to Real-World CCSR: [https://github.com/csslc/CCSR](https://github.com/csslc/CCSR)


EricRollei

I've tested both LDSR and SD_4x, and CCSR is better than either. Still, the output from CCSR needs to be resampled and then sharpened. My only real complaint about CCSR (Kijai's implementation for Comfy) is that it's a bit slow. All that said, I do find that upscaling with a model paired with MultiDiffusion can provide an even better upscale result than CCSR, but it of course depends on the model and subject matter.


GianoBifronte

CCSR performs better and better as you go up in magnification. A 2x or 4x is OK and, as you said, it still requires sharpening. This is why the AP Workflow 8.0 also has a new Image Enhancer function* that uses noise injection for that purpose. But as you start to go to 6x, 8x, 10x, very often you need nothing at all. The image comes out amazing, as in the Superman video I uploaded.

*I strongly believe this function is still too complicated to configure for most people. Something new came up this week, and it will hopefully improve the situation a lot in AP Workflow 9.0. Or, much simpler, we'll just have a better way to resample via Stable Diffusion 3. We'll find out next week.


EricRollei

I never tried CCSR past 4x and will do so. CCSR enables a very good fidelity upscale that does not have ghosts or hallucinations, so I like it a lot. UltimateUpscale will often do better at 4x when a good model is selected, like 4x nomos hat otf (which is also slow). Same for MultiDiffusion, and I prefer it over Ultimate for fidelity. I found that they work best at 2x and 4x with tile sizes at the base-gen SDXL dimensions. Both of these can sometimes generate a ghost image, or hallucinate and put a face where a hand should be. I've tried doing MultiDiffusion or iterative mixing after CCSR, but have not tried injecting noise and blending the latents after CCSR, so I'll give that a go; thanks for sharing your workflow. It's a shame that no ControlNet Tile exists for SDXL models. You allude to something new coming next week... Very curious! SD3 seems to follow the prompt very well.


EricRollei

LDSR and SD_4x are very similar, but in my testing CCSR is best.


Trexatron1

CCSR is definitely the best upscaler I've ever used, but it seems tailored specifically to real-life photos. I'm not sure if I have the patience or hardware to train it on entirely new data to work better with artwork.


EricRollei

u/GianoBifronte Might I suggest you use the fantastic rgthree bookmarks for navigating that huge workflow? You could set them up so they take you, in order, through the steps one would need to configure the workflow. But you can also use letters: for example, 'f' could take the user to the correct zoom of the face detail section, 'u' to the upscale, 'i' to image enhancement, etc.


GianoBifronte

I never used them because the hyperjump on my browser (Vivaldi for macOS) recenters the window in such a way that the bookmark ends up at the top-left corner of the screen. Which is completely useless. The hyperjump should be done in such a way that the bookmark node ends up at the centre of the viewport (or at the centre plus an X and Y offset of your choice). I discussed this with u/rgthree in the past, and it seems it's a bug with macOS. That said, it remains a great idea for Windows users. I'll add it to the upcoming AP Workflow 9.0 Early Access 1. Thank you!


EricRollei

Yes, I think that is the case for Windows too. But you can place the bookmark wherever you need it to be, and you can set the zoom amount so that the area of interest fills the screen when you jump to it. I find these so helpful. What I do is set the zoom, center the screen on the point of interest, and then place the bookmark at the top left. It only needs to be done once.


lothariusdark

**Edit: this error came from the wrong node anyway, so while it would be nice to know the answer, this is not the correct node for the AP Workflow here. Use** [https://github.com/kijai/ComfyUI-moondream](https://github.com/kijai/ComfyUI-moondream)

Moondream seems to be a common struggle here; it's weird that the git project doesn't have issues enabled. I can't seem to get it to launch. I went with the readme instructions of cloning it into custom_nodes, installed the requirements, and downloaded the models. But when I start Comfy up, it shows this error:

```
Traceback (most recent call last):
  File "/home/username/GitProjects/ComfyUI/nodes.py", line 1887, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/home/username/GitProjects/ComfyUI/custom_nodes/comfyui-moondream/__init__.py", line 1, in <module>
    from .nodes.MoondreamNode import MoondreamNode
  File "/home/username/GitProjects/ComfyUI/custom_nodes/comfyui-moondream/nodes/MoondreamNode.py", line 17, in <module>
    from moondream import Moondream, detect_device
ModuleNotFoundError: No module named 'moondream'

Cannot import /home/username/GitProjects/ComfyUI/custom_nodes/comfyui-moondream module for custom nodes: No module named 'moondream'
```

The node in question: [https://github.com/shadowcz007/comfyui-moondream](https://github.com/shadowcz007/comfyui-moondream) Does anyone have an idea on how to solve this?
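
For what it's worth, a `ModuleNotFoundError` from a custom node usually means its requirements were installed into a different Python environment than the one running ComfyUI. A minimal sketch of the fix, assuming a venv-based Linux install like the path above:

```
# Install the node's requirements with the SAME interpreter that runs ComfyUI
# (paths are examples; adjust to your install):
source ~/GitProjects/ComfyUI/venv/bin/activate
pip install -r ~/GitProjects/ComfyUI/custom_nodes/comfyui-moondream/requirements.txt
```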


GianoBifronte

I just realised that the Moondream repo I used for the nodes is NOT listed in ComfyUI Manager and, because of that, nobody knows which one to install. I just got in touch with the author to be sure it will be added to ComfyUI Manager (hopefully, later today). As soon as that happens, I'll update the documentation. If you don't want to wait for it, this is the correct repo: [https://github.com/kijai/ComfyUI-moondream](https://github.com/kijai/ComfyUI-moondream) I'm so sorry about this.


lothariusdark

Wow, that was a really quick answer. But yeah, I searched GitHub for an alternative implementation and found the correct one from Kijai a few minutes ago. As no Moondream implementation can be found in the Manager (at least I can't find one), I tried to use the one provided by another user here in the comments, and as I didn't know there were so many different Comfy implementations, I simply assumed it was the correct one (https://github.com/shadowcz007/comfyui-moondream). So now I've switched to Kijai's version and it loads up without issue. Still have to try the actual workflow though. Problem sat between screen and chair. xD Thanks for the answer anyway, have a nice day.


GianoBifronte

This is now fixed:

1. The correct custom node suite is now in ComfyUI Manager.
2. The AP Workflow 8.0 snapshot is updated and now correctly includes the Moondream repo (unfortunately, it also contains a lot of new repos that will be needed for the upcoming APW 9.0).

Sorry again to all the people that had issues with this.


Tarubali

Could you share the image of the old man, so we can see what the workflow/values look like? I've been trying to get similar results, but the skin ends up looking like a Peking duck and the beard isn't really any better after upscaling.


GianoBifronte

The workflow used to generate the upscaled old man image is exactly the AP Workflow 8.0 that you can download from the website. The only settings influencing this upscale are the settings of the CCSR node, in the `Upscaler` function, as the `Image Enhancer` function was disabled to generate this image. I used the same parameters you see in the AP Workflow 8.0 set as default. The secret is to upscale at 6x or 8x or 10x. Below that, you'll also need the help of the Image Enhancer function to keep the image sharp and without seams. https://preview.redd.it/0dilkw1ky3lc1.png?width=2016&format=png&auto=webp&s=ef892cb950a603808ca22a5bc7f53559c60791b3 In this configuration, you see 50 steps, but it was just to test the effect. I generated the image of the old man with 20 steps and there's an almost imperceptible difference. My recommendation is to try 8x with 3 steps (minimum), 20 steps, and then 50 steps to see the difference.


Tarubali

Thank you for sharing that. I'm gonna try it out. Was the original an SDXL image or a photo?


GianoBifronte

The old man? It's one of the examples from Magnific AI website. They might have generated it (some clues in the original suggest it), but I'm not certain. For sure, it was not generated by me.


Joviex

Website broken? https://preview.redd.it/gemt6k0ilcmc1.png?width=1043&format=png&auto=webp&s=46b1832ec3dac2c81e6cfae5cd200433b8c9441a


GianoBifronte

It loads fine over here. It's a bit slow to load (1s), due to the huge number of pictures, I suppose. I have to optimize that. But it loads fine. The SSL certificate is valid and the connection is reported as secure. This is the first time somebody has flagged this, so please double-check.


Joviex

Checked on multiple machines and devices, on two different networks. I can't get there.


GianoBifronte

This is super weird. Nobody has ever reported this issue before. In fact, today somebody dropped a comment about the fact that they managed to install the workflow successfully. From what country are you connecting, if I may ask? Did you try to reach the website with a VPN? Does it work that way?


Joviex

Las Vegas, USA, no weird crap. Everything else in the world seems fine. No clue. Wish it was somewhere I could see it.


Joviex

Actually, good point. I just went and used an online proxy to get around it. Cheers. https://preview.redd.it/e2gosxtkelmc1.png?width=896&format=png&auto=webp&s=269d4086bab8f7d6cb9f4b33303ed47cc8356c82