In my first blog post on vibe coding with Claude Desktop and MCP tools, I was winging it. I wasn’t sure what to expect. I didn’t have a real plan and just wanted to see what I would get. I gave Claude vague directions and let it do what it wanted.
One thing it did was use JavaScript for the web scraper. I get it; it probably assumed that since I used Gatsby for the website. But I really wanted to use Python and Scrapy, because I have already built a few scrapers with it in the past. Using a familiar tool would let me correct any wrong turns quickly. I am just learning Gatsby, and with AI writing most of my code, one unknown is enough.
Spoiler alert: by the end of this blog post, I decided Claude Desktop was not the most efficient way to code like this and found a better tool. One benefit to using Claude Desktop, though, is that if all you have is a Claude Chat subscription, you can use this method to have it write code for you.
Converting the Node Scraper to Python
In my last post, I used two chats: one to turn a default Gatsby installation into a usable blog site and one to build a scraper to find writing markets. This time I continued the second chat on the web scraper. I started with this prompt:
I am considering changing the scraper from using JavaScript to using Python and Scrapy with all of the same features.
Claude responded with the benefits of this change and laid out the structure of the new scraper:
eristoddle-scraper/
├── scrapy.cfg # Scrapy project configuration
├── requirements.txt # Dependencies
├── setup.py # For installing as a package
├── eristoddle_scraper/
│ ├── __init__.py
│ ├── items.py # Market data schema
│ ├── middlewares.py # Custom request/response handling
│ ├── pipelines.py # Processing and storing market data
│ ├── settings.py # Project settings
│ ├── spiders/ # Spider implementations
│ │ ├── __init__.py
│ │ ├── duotrope_spider.py # Duotrope crawler
│ │ ├── submittable_spider.py # Submittable crawler
│ │ ├── custom_spider.py # Custom site crawler
│ │ └── firecrawl_spider.py # FireCrawl integration
│ └── utils/
│ ├── __init__.py
│ ├── extractors.py # Functions for extracting data
│ ├── formatters.py # Text formatting utilities
│ └── history.py # Market update history tracking
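Claude laid out items.py as the market data schema. I don't have that file in front of me here, but to give a feel for what it holds, here is a rough stand-in as a plain dataclass (the field names are my guesses, not the generated code, which would declare them with scrapy.Item and scrapy.Field):

```python
from dataclasses import dataclass, field

@dataclass
class Market:
    """Rough sketch of one writing-market record (field names are guesses)."""
    name: str
    url: str
    genres: list = field(default_factory=list)  # e.g. ["fantasy", "horror"]
    pay_rate: str = ""                          # e.g. "$0.08/word"
    deadline: str = ""                          # submission window, if any
```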
So I let it get to work. I had to watch it and tell it to “Continue” every once in a while. When it was done, it described the new scraper in enough detail that I had to tell it to continue partway through. The first part:
The second part told me how to use the new web scraper and how to extend it:
Issues I Ran Into
I have learned to expect issues and to QA Claude's changes often. I am turning this into a repeatable process to streamline things, and even though I eventually switched to another tool, some of what I learned here is still useful if you want to try Claude Desktop for this. Here are some issues I hit before the scraper actually ran:
no module named python-dotenv
I just told it this happened, and it fixed it.
...
The installed reactor (twisted.internet.selectreactor.SelectReactor) does not match the requested one (twisted.internet.asyncioreactor.AsyncioSelectorReactor)
...
The complete error was rather long. I pasted the whole thing in and it added missing imports.
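For what it's worth, that reactor mismatch almost always means settings.py needs to pin Twisted to the asyncio reactor, which scrapy-playwright requires. I didn't diff Claude's fix, but it likely included a line like this:

```python
# settings.py — force Scrapy onto the asyncio reactor so Twisted and
# Playwright share one event loop (required by scrapy-playwright)
TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"
```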
About this time, it figured out it had messed some things up, so it started running and testing the code itself. It found this error: NameError: name 'unicode' is not defined. Then it fixed it itself. I am not sure what triggered this, but there are times I wish it would just do this. Be ready for it when you are running a Scrapy-with-Playwright scraper in debug mode, though, because you will only intermittently have control over the cursor.
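For context, unicode is a Python 2 name that no longer exists in Python 3, where every str is already unicode. I didn't look closely at Claude's fix, but the standard replacement is something like:

```python
# Python 2 code like `unicode(value)` breaks on Python 3.
# A safe drop-in replacement:
def to_text(value):
    """Coerce bytes or any object to str (Python 3's unicode type)."""
    if isinstance(value, bytes):
        return value.decode("utf-8")
    return str(value)
```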
After that the scraper ran, so I tested one job. It didn’t find any data, so I asked about that and promptly got told I had reached the limit of the chat. I had not yet discovered the solution I currently use, so I figured I should…
Use Projects When Vibe Coding with Claude Desktop
I quickly realized that trying to build this project chat by chat was going to be a pain in the ass, so I created a project for it. First, I exported the two chats with the Claude Magic Export Chrome extension and uploaded them as Project Knowledge in the new project. I think there are better ways to export chats now, and I thought I had found a way to just add existing chats as Project Knowledge, but now I can’t find it and I might be lying. I also created a markdown file with a breakdown of the project and its file structure, figuring I would add more as I thought of it.
More Issues with the Scrapy Web Scraper Claude Created
Once I created the Claude project, I continued debugging the scraper. When I asked it about the job that wasn’t collecting data, it suggested adding verbose logging and a bunch of debugging code, so I went to check out the site before doing all that. It was a single-page app. I told it that pure Scrapy might not work for it. I knew adding Playwright was the answer, but I asked just to see what it would say. It came to the same conclusion.
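For reference, wiring Playwright into a Scrapy project mostly happens in settings.py; per the scrapy-playwright docs, the download handlers get swapped out roughly like this, and individual requests then opt in with meta={"playwright": True}:

```python
# settings.py — route requests through scrapy-playwright so pages run
# their JavaScript before parsing (what a single-page app needs)
DOWNLOAD_HANDLERS = {
    "http": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
    "https": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
}
```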
It wrote the code for that. I ran it again, and it still did not collect any data. It suggested the debugging method again. I looked at the code before doing this and saw the issue: the Scrapy part of the scraper was using different selectors than the Playwright part. When it added the Playwright scraper, it just dumped a bunch of generic selectors into it. It actually took a few messages to get it to see what I was seeing, to the point that I asked, “You are looking at my code right, with file access?”
Then I had to tell it which set of selectors was correct even though it had created both of them. After that runaround, the scraper finally worked in Python and actually scraped data.
Then I figured out it was only scraping the first page of anything paginated. In that same new chat I started trying to get it to fix that problem, and I eventually got told the chat was too long again before the fix was done.
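The pagination fix boils down to a standard crawl loop: parse a page, collect its items, then follow the next-page link until there isn't one. Stripped of the Scrapy specifics (which I'm paraphrasing, not quoting), the shape is roughly this, with fetch standing in for the spider's parse callback:

```python
def scrape_all_pages(fetch, start_url):
    """Walk paginated results until no next-page link remains.

    `fetch(url)` returns (items, next_url_or_None) — a stand-in for a
    Scrapy parse callback that yields items and then a response.follow().
    """
    url = start_url
    while url:
        items, url = fetch(url)
        yield from items
```

In the real spider this is a parse method that yields its items and then yields response.follow(next_page, self.parse) whenever the next-page selector matches.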
After that I limited a chat in the Project to one bug or feature. I also started wondering what I should add to Project Knowledge, so I didn’t start from scratch with every new chat.
I had it do a few other things that day and realized this was still going to be a lot of work. I was scraping a few different sites and wanted the data I was getting to be normalized.
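Normalizing across sites usually means one shared schema plus a per-site field map. A minimal sketch of the idea (the field names here are illustrative, not from the actual project):

```python
def normalize_market(raw, field_map):
    """Rename one site's raw fields onto the shared schema and tidy values.

    `field_map` is per-site, e.g. {"Payment": "pay_rate"} — mapping what
    the site calls a field to the name the pipeline stores it under.
    """
    normalized = {}
    for src, dest in field_map.items():
        value = raw.get(src, "")
        normalized[dest] = value.strip() if isinstance(value, str) else value
    return normalized
```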
I explore this kind of thing and write articles about it when I have gaps in my freelance writing work, and about that time I got a batch of new writing projects. By the time I got back to this, I had started wondering about another tool that wouldn’t have as many issues remembering parts of large projects.
In the End, I Switched to Claude Code
I actually learned about Claude Code from a dialog box on the Claude AI website, so I thought I’d try it. There were not many tutorials at the time, and I wanted to start a project from scratch with it. Most of what I was reading said it “knows your codebase” and “can help you fix bugs.” That was not telling me what I wanted to know, so I installed it, added money to my Claude API account, and asked it, “Are you only for existing codebases, or can you help me build a project from scratch?”
It was on. I wanted to start with a new project so I would have a clean slate and could see what it can do. I do plan on using Claude Code with this scraper project eventually. My next article in this category will cover building an Electron desktop app with Claude Code. I have already made a lot of progress on it; the app is functional enough to use for what I built it for.
It is also much easier and less haphazard to use for coding than Claude Desktop. I have become a project manager and QA, while it does most of the work. I have a process that lets me brainstorm, vet, develop, and test new features in a cycle that doesn’t let many bugs or wrong turns slip in. This new project is already more complex and has gotten farther in about the same amount of time this first project took. As soon as I get all of my notes together, I’ll write that post. Stay tuned.