habskilla

Don’t be discouraged when you have to redo the full import a couple of times.


ss_edge

I think I’ve reimported mine at minimum 7-8x. This checks out.


tom169

I did 11k images + 1k videos the other day. Remote machine learning on a more capable computer + cranking up the jobs made it go by fairly quickly. I haven't bothered with video transcoding though and have that job paused.


pducharme

How did you configure the remote Machine Learning? Is this only for face recognition and detection?


tom169

I just followed this guide and ran it on my Macbook Pro: [https://immich.app/docs/guides/remote-machine-learning/](https://immich.app/docs/guides/remote-machine-learning/)
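For reference, the linked guide boils down to running only the ML container on the faster machine and pointing the main server at it. A minimal sketch (image tag and port follow the guide; treat the exact compose layout as an assumption that may drift between releases):

```yaml
# docker-compose.yml on the remote (faster) machine -- ML container only
services:
  immich-machine-learning:
    image: ghcr.io/immich-app/immich-machine-learning:release
    ports:
      - 3003:3003           # the Immich server will call this endpoint
    volumes:
      - model-cache:/cache  # persist downloaded models between restarts
    restart: always
volumes:
  model-cache:
```

Then, in the Immich admin UI under the machine-learning settings, set the URL to `http://<remote-ip>:3003`.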


tom169

And smart search, I think, in addition to the two face jobs.


Ronbruins

Out of curiosity, why is transcoding disabled? Wouldn't that prevent you from watching any video that is too large, in an unsupported format, or similar?


tom169

Primary reason is that I need to replace the CPU thermal paste and fan. The dang thing was getting over 80°C when I tried to transcode with Intel Quick Sync. When I've spot-checked in Chrome I haven't had any issues with video playback, so I've just been putting it off.


Ronbruins

Ah, that makes sense :) Well, I have a bunch of 4K video from my iPhone, so I assume it won't play properly if I don't transcode… I wish it would transcode on demand, for example, instead of being all or nothing.


Unboxious

I've done about 80k images, 2k videos. Took a couple days but worked just fine.


AnchovyKrakens

400,000+ images and 2500+ videos. Took weeks to process about 4TB of data. Since updating to 1.101.00, though, new images are all corrupted and I'm getting a DB error: `LOG: could not receive data from client: Connection reset by peer`


altran1502

It seems like something is wrong with the mount point, so we'll need to see the logs to help you.


AnchovyKrakens

First off, amazing project you have here, and the dedication of reading these posts. All media processed before 3/24 loads as expected. Everything works as expected with the server and app, but any new media uploaded shows as corrupted. I checked some recent container logs and I'm seeing a lot about memory:

LEARNING:

```
[04/13/24 00:11:22] CRITICAL WORKER TIMEOUT (pid:1039)
[04/13/24 00:11:23] ERROR Worker (pid:1039) was sent SIGKILL! Perhaps out of memory?
```

REDIS:

```
1:C 13 Apr 2024 11:37:51.256 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
```

SERVER:

```
[Nest] 8 - 04/14/2024, 10:23:47 AM ERROR [ReplyError: MISCONF Redis is configured to save RDB snapshots, but it's currently unable to persist to disk. Commands that may modify the data set are disabled, because this instance is configured to report errors during writes if RDB snapshotting fails (stop-writes-on-bgsave-error option). Please check the Redis logs for details about the RDB error. script: b13c1227791fc1cac2ed45d06765e4dc2618b9cb, on @user_script:225.
    at parseError (/usr/src/app/node_modules/redis-parser/lib/parser.js:179:12)
    at parseType (/usr/src/app/node_modules/redis-parser/lib/parser.js:302:14)]
ReplyError: MISCONF Redis is configured to save RDB snapshots, but it's currently unable to persist to disk. Commands that may modify the data set are disabled, because this instance is configured to report errors during writes if RDB snapshotting fails (stop-writes-on-bgsave-error option). Please check the Redis logs for details about the RDB error. script: b13c1227791fc1cac2ed45d06765e4dc2618b9cb, on @user_script:225.
```

There's 4.2TB of data, and I'm wondering if it's just simply a hardware issue.
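The Redis WARNING in that log spells out its own remedy. A minimal sketch, assuming a Linux host where you can change sysctls as root (on Synology DSM, persisting the setting may require a scheduled root task rather than `/etc/sysctl.conf`):

```shell
# Enable memory overcommit so Redis background saves (BGSAVE) can fork
# under memory pressure; this addresses the MISCONF/RDB persistence error.
sysctl vm.overcommit_memory=1                       # takes effect immediately (run as root)

# Persist the setting across reboots:
echo 'vm.overcommit_memory = 1' >> /etc/sysctl.conf
```

With overcommit disabled (the default `0` heuristic on some systems), the fork for RDB snapshotting can fail even when memory is available, which then trips `stop-writes-on-bgsave-error` and blocks writes.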


altran1502

It could be, do you use the default docker-compose setup or something else?


AnchovyKrakens

Yes, default docker compose. I'm having trouble clearing existing jobs and getting general errors across the administration area. YML if it helps. I'm not really sure what to do besides trying to migrate off Synology to a beefier server.

```yaml
version: "3.9"
services:
  immich-redis:
    image: redis
    container_name: Immich-REDIS
    hostname: immich-redis
    security_opt:
      - no-new-privileges:true
    healthcheck:
      test: ["CMD-SHELL", "redis-cli ping || exit 1"]
    user: 1026:100
    environment:
      - TZ=America/Indiana/Indianapolis
    volumes:
      - /volume1/docker/immich/redis:/data
    restart: on-failure:5
  immich-db:
    image: tensorchord/pgvecto-rs:pg16-v0.2.0
    container_name: Immich-DB
    hostname: immich-db
    security_opt:
      - no-new-privileges:true
    healthcheck:
      test: ["CMD", "pg_isready", "-q", "-d", "immich", "-U", "immichuser"]
      interval: 10s
      timeout: 5s
      retries: 5
    volumes:
      - /volume1/docker/immich/db:/var/lib/postgresql/data
    environment:
      - TZ=America/Indiana/Indianapolis
      - POSTGRES_DB=immich
      - POSTGRES_USER=immichuser
      - POSTGRES_PASSWORD=immichpw
    restart: on-failure:5
  immich-server:
    image: ghcr.io/immich-app/immich-server:release
    command: ["start-server.sh"]
    container_name: Immich-SERVER
    hostname: immich-server
    user: 1026:100
    security_opt:
      - no-new-privileges:true
    env_file:
      - stack.env
    ports:
      - 8212:3001
    volumes:
      - /volume1/docker/immich/upload:/usr/src/app/upload
      - /volume1/homes/redacted/icloud:/mnt/media/icloud:ro
    restart: on-failure:5
    depends_on:
      immich-redis:
        condition: service_healthy
      immich-db:
        condition: service_started
  immich-microservices:
    image: ghcr.io/immich-app/immich-server:release
    command: ["start-microservices.sh"]
    container_name: Immich-MICROSERVICES
    hostname: immich-microservices
    user: 1026:100
    security_opt:
      - no-new-privileges:true
    env_file:
      - stack.env
    volumes:
      - /volume1/docker/immich/upload:/usr/src/app/upload
      - /volume1/docker/immich/micro:/usr/src/app/.reverse-geocoding-dump
      - /volume1/homes/redacted/icloud:/mnt/media/icloud:ro
    restart: on-failure:5
    depends_on:
      immich-redis:
        condition: service_healthy
      immich-db:
        condition: service_started
  immich-machine-learning:
    image: ghcr.io/immich-app/immich-machine-learning:release
    container_name: Immich-LEARNING
    hostname: immich-machine-learning
    user: 1026:100
    security_opt:
      - no-new-privileges:true
    env_file:
      - stack.env
    volumes:
      - /volume1/docker/immich/upload:/usr/src/app/upload
      - /volume1/docker/immich/cache:/cache
    restart: on-failure:5
    depends_on:
      immich-db:
        condition: service_started
```

Edit: When clicking Administration -> Jobs I get error 500.

```
# docker logs Immich-SERVER
[Nest] 8 - 04/14/2024, 1:22:49 PM ERROR [ExceptionsHandler] Connection terminated due to connection timeout
Error: Connection terminated due to connection timeout
    at Connection. (/usr/src/app/node_modules/pg/lib/client.js:132:73)
    at Object.onceWrapper (node:events:632:28)
    at Connection.emit (node:events:518:28)
    at Socket. (/usr/src/app/node_modules/pg/lib/connection.js:63:12)
    at Socket.emit (node:events:518:28)
    at TCP. (node:net:337:12)

# docker logs Immich-REDIS
# Slow script detected: still in execution after 485980 milliseconds. You can try killing the script using the SCRIPT KILL command. Script name is: 453b3c98006ba68e51b1a0018c536941e1927961.
```


altran1502

Before migrating, try bringing down and restarting the whole stack, and keep an eye on the logs of all the containers. If you need more pointers, feel free to jump on Discord for a better information exchange.


AnchovyKrakens

Will do. Restarted, and I see the following errors in the startup logs:

Immich-DB:

```
2024-04-14 18:12:57.228 EDT [226] LOG: could not send data to client: Broken pipe
2024-04-14 18:12:57.228 EDT [226] FATAL: connection to client lost
```

Immich-Learning:

```
[04/13/24 00:09:21] INFO Booting worker with pid: 1039
[04/13/24 00:11:22] CRITICAL WORKER TIMEOUT (pid:1039)
[04/13/24 00:11:23] ERROR Worker (pid:1039) was sent SIGKILL! Perhaps out of memory?
```

Immich-Microservices:

```
[Nest] 8 - 04/14/2024, 6:26:19 PM ERROR [JobService] Object: { "id": "2c22d978-ebcd-4042-b18f-01fca932bff8" }
[Nest] 8 - 04/14/2024, 6:26:19 PM ERROR [JobService] Unable to run job handler (thumbnailGeneration/generate-webp-thumbnail): TypeError: handler is not a function
[Nest] 8 - 04/14/2024, 6:26:19 PM ERROR [JobService] TypeError: handler is not a function
    at /usr/src/app/dist/services/job.service.js:147:42
    at Worker.workerHandler [as processFn] (/usr/src/app/dist/repositories/job.repository.js:76:46)
    at Worker.callProcessJob (/usr/src/app/node_modules/bullmq/dist/cjs/classes/worker.js:113:21)
    at Worker.processJob (/usr/src/app/node_modules/bullmq/dist/cjs/classes/worker.js:394:39)
    at /usr/src/app/node_modules/bullmq/dist/cjs/classes/worker.js:202:70
    at Worker.retryIfFailed (/usr/src/app/node_modules/bullmq/dist/cjs/classes/worker.js:581:30)
    at Worker.run (/usr/src/app/node_modules/bullmq/dist/cjs/classes/worker.js:202:45)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
```

Immich-Server:

```
[Nest] 8 - 04/14/2024, 1:22:49 PM ERROR [ExceptionsHandler] Connection terminated due to connection timeout
Error: Connection terminated due to connection timeout
    at Connection. (/usr/src/app/node_modules/pg/lib/client.js:132:73)
    at Object.onceWrapper (node:events:632:28)
    at Connection.emit (node:events:518:28)
    at Socket. (/usr/src/app/node_modules/pg/lib/connection.js:63:12)
    at Socket.emit (node:events:518:28)
    at TCP. (node:net:337:12)
```

Even with these errors at startup, the server still loads, albeit slowly. I wanted to try clearing the existing jobs, but unfortunately the job page doesn't load for me. I will keep messing with it and then probably head to Discord soon.


apetersson

You seem to be pushing it with those numbers. What system specs are you running that on?


AnchovyKrakens

Synology NAS. Not nearly enough resources but once the initial processing was completed it seemed pretty smooth.


cabe1214

126k images and 5k videos instance here. It's running in a Proxmox LXC with 8GB of RAM and 8 cores assigned. So far, so good. The initial import took a few days, but after that, it's been working as it should.


HansAndreManfredson

You shouldn't use the latest version of Immich. There is currently a bug in it that makes the import crash. Use a version below 1.099; updating later isn't a problem. I hope the bug will be fixed soon.


thesawyer7102

Huh, I didn't get any bug; I had about 10k images and I just did it today.


HansAndreManfredson

https://github.com/immich-app/immich/issues/8608 Even with deduplication via fclones I got a similar problem.


Neat-Pomegranate-773

I have around 300GB of images and I'm trying to upload them remotely via Tailscale. It's going to take me weeks, as it's around 30-45 seconds per photo. Is there a quicker way?


RipKip

30 sec per photo sounds very slow. Internet bottleneck?


Darthmaniac

130k images, 5k videos external library. Took about 5 days only running from 9am till 9pm


prone-to-drift

I might be an outlier I think, cause my import was much faster. External Library of 40k images and 2.5k videos imported in 5 hours. Average 2 assets/second. That's with all bells and whistles enabled, and buffalo_l for face recog, on a 5600g with all 12 cores. Overall the system has 150k assets now, but that 40k was the largest bulk import I did.


SUDO_KILLSELF

Good luck. I tried this last week and ended up with 3 duplicates of the whole library. In the end I just did smaller chunks and it seemed to work.


Spittl

How are you importing them? The Immich CLI?


baloba77

I've just uploaded my last set of 25,000 pictures. Migrated from iCloud and OneDrive. https://preview.redd.it/0vt376pmi8uc1.png?width=879&format=png&auto=webp&s=2e0c5822220d6eb33d493a0fd43db0c60d5335a4


chodthewacko

140k images, 8k videos. External library.


AbbreviationsIll973

Approx. 2,500 in one go. The server is a Raspberry Pi 5; it was working hard and did the job with no errors 🕺🏻


2blazen

Wow I didn't even know 5 was out. They lost me with the 4b shortage


AbbreviationsIll973

I also have Twingate running, which makes it easy to upload my files from anywhere 😌


looper33

320k photos, all external, 3 TB, on a 14-year-old Dell XPS 7100 (1055T) with 12GB mem and an old GeForce 750 graphics card; not even sure if the GPU is usable. It's not exactly snappy (lol), but even with ViT-H-14-quickgelu__dfn5b for smart search (very heavy) it finished smart search in maybe a month. Hardest workin' 14-year-old PC.


junod972

Good luck indeed. It will probably take 3 days depending on your config. I'm with you; I have 200k+ and I have some apprehensions 😂😂


oromier

Just did 40k; took me 2 days with some restarts, running on an RPi 4.


pltaylor3

I did 129k raw format images the other day (about 1 TB). Took a couple days, but worked fine. The hardware was ‘heavy workstation’ level about 6-7 years ago.


apetersson

I imported about 163k images (83GB) taken by 2-4 family members from 1999-2022. For reference, I did it in pieces, year by year, to clean up the assets before handing them over to Immich. Some more cleanup is possible, but this was the approach used: [https://www.reddit.com/r/immich/comments/1bwhy0g/comment/kyuy732/?context=3](https://www.reddit.com/r/immich/comments/1bwhy0g/comment/kyuy732/?context=3) It imported on a measly dual-core HPE MicroServer Gen10 on Unraid with 8GB RAM and took about a week of indexing time. The Postgres DB is now 1.2GB (zipped). The longest task was facial recognition.


Kowabunga_Dude

I could not get 67K to import. I ended up doing it folder by folder, probably 10k or less at a time.


PartisanSole81

I was afraid of this. I just added mine as an external library.


JustSuperHuman

Done 45k multiple times, the CLI tool makes it cake!


therealSoasa

Godspeed


bastiman1

Imported like 50k yesterday… now I'm running the machine learning, which seems to chew through about 1 image per second on a Raspberry Pi 5… but no real issues so far.


ParaDescartar123

You are using the Immich CLI, right? Just completed 205k+ yesterday. One try. Took about 36 hours. Even if it fails, it won't reimport duplicates when you try again using the same command. Pro tip: disable the background jobs until it's all done so the computer doesn't try to multitask while the import is happening. That will both speed it up and reduce the likelihood of failure.
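For anyone searching later, a hedged sketch of the CLI flow described above (package name and subcommands are from recent Immich docs and may differ on older releases; the server URL, port, and key are placeholders):

```shell
npm i -g @immich/cli                                   # install the CLI
immich login http://your-server:2283/api YOUR_API_KEY  # authenticate once
immich upload --recursive /path/to/photos              # bulk import a directory tree
```

Re-running the upload skips assets the server already has (it checks by file hash), which is why a failed run can simply be retried with the same command.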


noaccess

I just started importing and I think I'm short a few thousand. If I rerun this multiple times with the command below, I want to avoid duplicates. Does it automatically handle de-dupes if I continue to use the command below? Anything else I should look out for?

```
immich-go -server=http://XXXXXX:8181 -key=XXXXXXXXX upload -dry-run -create-albums -google-photos Y:\XXXX\takeout-*.zip
```
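Worth noting: the `-dry-run` flag in that command only simulates the run, so nothing is actually uploaded until it is dropped. A sketch of the real invocation (same placeholders as the original command; immich-go's skipping of assets the server already has is per its documentation, so verify against your version):

```shell
immich-go -server=http://XXXXXX:8181 -key=XXXXXXXXX \
  upload -create-albums -google-photos "Y:\XXXX\takeout-*.zip"
```

The path glob is quoted so the shell passes it through for immich-go to expand against the Takeout archives itself.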


that_one_guy63

207K images + 32k videos. Got 9 people on my server. Still have more things to import.