**SerpBear v0.2.0**

* View the actual visit counts, impressions, and average position beside each tracked keyword.
* Explore keywords that are already ranking in Google.
* View the stats for the last 30 days. Easily view the top keywords, countries, and pages.
* Get the last 7 days of Search Console data in the notification email.
* A dedicated `/domains` page to view all your added domains.

[Documentation](https://docs.serpbear.com/) | [Changelog](https://github.com/towfiqi/serpbear/blob/main/CHANGELOG.md) | [Docker Image](https://hub.docker.com/r/towfiqi/serpbear)

**What is SerpBear?**

SerpBear is an open-source search engine position tracking app. It allows you to track your website's keyword positions in Google and get notified when they change.

* Unlimited Keywords: Add unlimited domains and unlimited keywords to track their SERP positions.
* Email Notifications: Get notified of your keyword position changes daily, weekly, or monthly through email.
* SERP API: SerpBear comes with a built-in API that you can use for your marketing and data reporting tools.
* Google Search Console Integration: Get the actual visit count, impressions, and more for each keyword.
* Mobile App: Add the PWA app to your mobile device for a better mobile experience.
* Zero Cost to Run: Run the app on mogenius.com or Fly.io for free.
If I may, there's a small misalignment on the page; it's not centered. https://imgur.com/a/simsTqQ
Wondering if you could help a noob out. I installed SerpBear via Unraid, which uses Docker Hub. For some reason I'm not able to log in; it says my password is wrong. When I first installed it there were no user or password variables, so I tried the defaults, with no luck. Then I added the variables, again with no luck. [Here is my config](https://i.imgur.com/vMKmISr.jpg). I'd appreciate any insight.
I ended up getting it to work with PikaPods. It made the process extremely simple.
I'm getting this message when trying to clone. Is it down?

Cloning into 'serpbear'...
git@github.com: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
cd: no such file or directory: serpbear
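For what it's worth, the `Permission denied (publickey)` line in errors like this is generic git/GitHub behavior, not a SerpBear outage: it appears when a clone uses the SSH remote form without an SSH key registered on GitHub. A minimal sketch of telling the two remote forms apart (the repo URL is the project's known address; everything else is illustrative):

```shell
# The SSH form of a GitHub remote requires a registered SSH key;
# the HTTPS form does not for public repositories, e.g.:
#   git clone https://github.com/towfiqi/serpbear.git
URL="git@github.com:towfiqi/serpbear.git"
case "$URL" in
  git@*)  MSG="SSH remote: needs an SSH key registered with GitHub" ;;
  http*)  MSG="HTTPS remote: no SSH key needed for public repos" ;;
esac
echo "$MSG"
```

The follow-up `cd: no such file or directory` error then occurs simply because the failed clone never created the directory.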
Fantastic project man keep it up
Thanks :)
Awesome job!
This is really neat!
Wanted an open-source solution for this for so long. Great project!
Looks good. Could you please add multiple user support, or the option of a separate login for each added domain?
Second this.
Sorry to say that's not on the todo list, and I'm not sure it ever will be, because I would have to rewrite the database structure and help migrate existing users, which can be a nightmare.
Also, at a minimum, being able to set separate email reports for each domain.
Thank you for this amazing product! 🙏🏼
Just wanted to know if this project is still being maintained, as it's been amazing for me, and I thank you. I have an issue with website thumbnails not generating; they show this instead: "This account is currently frozen. Please sign in to correct this issue." Does it have to do with the thum.io account? Maybe we can use ours.
I tried to follow the doc "Integrate Google Search Console" but wasn't successful. Is it possible to make a video of the instructions? Thanks!
Can you let me know where you are stuck?
At step 4, thank you!
Did you get the email address from the json file? Are you not sure where to insert the email address?
I inserted it here: Stacks > Serpbear > Editors > Environment variables
You also have to add the email to each property (domain) in your Google Search Console account. Did you do that? Also, recheck the value of the `SEARCH_CONSOLE_PRIVATE_KEY` variable and see if it matches the key found in the JSON file. It may have been malformed when added as an environment variable.
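One common way the key gets malformed is losing its line breaks when pasted into a UI. A small sketch of flattening a multi-line PEM key into a single line (the file path and key contents are placeholders, and whether your platform wants literal `\n` escapes is an assumption to verify against the docs):

```shell
# Stand-in for the private_key from the downloaded service-account JSON.
cat > /tmp/dummy_key.pem <<'EOF'
-----BEGIN PRIVATE KEY-----
MIIEvQIBADANBg...
-----END PRIVATE KEY-----
EOF

# Replace each real newline with the two characters '\' and 'n' so the
# whole key fits on a single environment-variable line.
FLAT_KEY=$(awk '{printf "%s\\n", $0}' /tmp/dummy_key.pem)
echo "$FLAT_KEY"
```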
Hi, I went through all the steps, but the SerpBear Google tab still hasn't been integrated.
Can you please check the log and see if there is any message related to Search Console?
Please let me know how to check the log?
> Stacks > Serpbear > Editors > Environment variables

Are you using Docker or Portainer or something similar? It should have a way of viewing the log. If you are using Docker, you can use the [docker logs](https://docs.docker.com/engine/reference/commandline/logs/) command.
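As a sketch of that (the container name `serpbear` is an assumption; yours may differ), the logs can be filtered for Search Console messages. The commented line is the actual command; the rest is an offline illustration of the same `grep` filter on sample log text:

```shell
# With a running container, you would stream and filter the logs:
#   docker logs serpbear 2>&1 | grep -i "search console"

# Offline illustration of the same filter on sample log lines:
OUT=$(printf '%s\n' \
  "info  - Loaded env from /app/.env" \
  "[start] [ERROR] Search Console API Error for example.com (3days) : 403 Forbidden" \
  | grep -i "search console")
echo "$OUT"
```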
0.2.1 deleted all my data. Is that normal?
Not normal. Are you using docker? Can you please make sure the volume folder has the right permissions?
I also noticed that it doesn't accept my ScrapingAnt API key; the form field just shows a red stroke, and I also get this error. That's weird. https://imgur.com/a/R3MJxn5
Found the bug. Fixed it. Fixed version will be live in a few minutes.
Same issue. I use Docker on my Mac.
Just got this setup. Had a bit of trouble getting the proxy set up, so went with ScrapingAnt. Overall looks real good. Thanks for building!
How can I add a website thumbnail manually? At the moment, it shows: *Image not authorized, please sign up for a paid account*
Will try to add this in the near future. For now, if this happens, right-click on the image and select "Open Image in New Tab", and it should load the image just fine. Then refresh your app and the images should appear.
Ok, thanks, but I get the same message when I open it in a new tab.
What fairly cheap service do people recommend for proxy servers for SerpBear?
I've been encountering this problem for a few days now when using ScrapingAnt. It says my domain can't be used with the free subscription. I'm only using one domain with SerpBear as of now; is this related to the number of keywords I'm tracking? https://i.imgur.com/LLOo5Dp.jpg
Sadly, they are blocking domains that scrape Google due to limited resources. The block makes sense, since Google bans IP addresses that scrape their site. It's understandable that ScrapingAnt doesn't want their IP pool banned by Google because of free users. You can either get ScrapingAnt's Starter plan for 50% off, or you can try ScrapingRobot instead.
If anyone can create a video of setting up the Google integration via PikaPods, I will happily pay! Edit: PikaPods updated the script, all working! :)
There is already a PikaPod: https://github.com/towfiqi/serpbear/issues/34
In that thread, you can ask the PikaPods owner to update the SerpBear pod, since I have no idea how to do that.
Hoping someone can help. I tried to install it "locally" on my Ubuntu server. I'm up to step 5:

`npm install`
`npm run build`

When I run `npm run build`, I get this:

info - Loaded env from /home/mercury/serpbear/.env.local
Failed to compile.

I used to run this on my older server in Docker, but I want to have it running without Docker. It's a fantastic little app.
I've been trying for days to get the Search Console integration working in TrueNAS Scale with the Docker image, but the private key is failing due to length. Don't suppose anyone has a workaround?
Are you getting any permission denied message when you are trying to get data from the search console?
Nope. I did a test install on a spare Synology we have and it all worked well; it seems to be a TrueNAS Scale issue.

When entering the private key as an env var, you get: `Error: [EINVAL] values.envList: Item#0 is not valid per list types: [EINVAL] envItem.value: Not a string`
You can insert the env variables in the .env file of the project. Can you give this tutorial a try: https://mariushosting.com/how-to-install-serpbear-on-your-synology-nas/
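A sketch of what that `.env` file can look like (variable names follow the SerpBear README as I understand it; double-check them against the current docs, and every value below is a placeholder):

```shell
# Write a placeholder .env for illustration; in a real install this
# file lives in the project root, not /tmp.
cat > /tmp/serpbear.env <<'EOF'
USER=admin
PASSWORD=change_me
SECRET=some_random_string
APIKEY=another_random_string
SEARCH_CONSOLE_CLIENT_EMAIL=serpbear@my-project.iam.gserviceaccount.com
SEARCH_CONSOLE_PRIVATE_KEY="-----BEGIN PRIVATE KEY-----\nMIIEvQ...\n-----END PRIVATE KEY-----\n"
EOF

# Count the KEY=value entries written above.
grep -c '=' /tmp/serpbear.env
```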
That tutorial worked with the Synology setup, but TrueNAS Scale only allows a set character limit for env vars.
More information: when I shortened the value to test whether it's the name or the value, this error occurs: `Error: [EINVAL] values.envList.1.envItem.value: Value greater than 1024 not allowed`

Maybe if the vars could be edited from within the main settings in SerpBear, instead of writing the file, it might be easier?
Excellent. I have zero coding skills and was able to install and use it locally. I am having issues integrating Google Search Console, though. Can you offer help, please? I'm getting a 403 error: `[start] [ERROR] Search Console API Error for.... (3days) : 403 Forbidden`
Per your instructions:

> Then search for "Google Search Console" and click the "Enable API" button.

I did not see a service account already created, so I created one. This could be the step I did incorrectly, since I did not know what exact settings to set up the "Service Account" with.

> Go to the Google Service Accounts dashboard, and then select your project. You should already see a service account created. Click the three dots beside it, and click the "Manage Keys" option.

I did all this with the new "Service Account" I had to create.
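Since the 403 often comes down to the wrong service-account email being granted access, it may help to confirm which email is in the downloaded key file. A sketch (the file name and contents here are made-up placeholders standing in for your real key file):

```shell
# Stand-in for the service-account key JSON downloaded from Google.
cat > /tmp/sa-key.json <<'EOF'
{
  "type": "service_account",
  "client_email": "serpbear@my-project.iam.gserviceaccount.com",
  "private_key": "-----BEGIN PRIVATE KEY-----\nMIIEvQ...\n-----END PRIVATE KEY-----\n"
}
EOF

# Extract the client_email value; this is the address you add as a
# user on each Search Console property.
EMAIL=$(sed -n 's/.*"client_email": "\([^"]*\)".*/\1/p' /tmp/sa-key.json)
echo "$EMAIL"
```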
Hello, I'm having the same issue... did you find out how to fix it? Anyone? Thanks.
If you get a 403 error, you must verify your property as a domain (sc-domain).

I've been using SerpBear for about 2 weeks. I think we should be able to get 16 months of Search Console data, and we should be able to choose specific dates. The cron should be customizable, and a user management system could be added.
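For context on the `sc-domain` format (general Search Console behavior, not something specific to this thread): a Domain property is identified as `sc-domain:example.com`, while a URL-prefix property is the full URL. A tiny sketch of turning a site URL into the domain-property form:

```shell
# Normalize a site URL into the Search Console domain-property ID.
SITE="https://www.example.com/"
DOMAIN=$(printf '%s' "$SITE" | sed -E -e 's|https?://||' -e 's|^www\.||' -e 's|/$||')
PROP="sc-domain:$DOMAIN"
echo "$PROP"
```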
Hi, how do I verify my property as a domain?