Using AI as a Programmer: A piece of code is worth a thousand words


Can You Code Only with AI?

The question is often repeated on forums and networks: is it worth learning to code if AI exists? The short answer is yes, it's still essential. AI can generate code blocks, but without understanding the logic behind them, everything turns into a black box. In my case, I confirmed this when I asked ChatGPT for help with a Django Admin model: the solution seemed correct, but it didn't work when I tested it. That's when I understood that if you don't know how to code, you can't validate what the AI returns.

The "Black Box" Illusion

Many believe it's enough to describe what they want and the AI will do everything. This can work for quick prototypes, but relying only on it is dangerous. If you can't read or adapt the code, any error can block you.

Why Continuing to Learn Code is Essential

Knowing how to code allows you to correct, adapt, and improve what the AI generates. Furthermore, without that knowledge, you wouldn't even know what to ask for or how to validate the answers.

Advantages and Risks of Using AI for Programming

What AI Does Well in Programming

  • Generate quick code examples.
  • Explain unknown functions or libraries.
  • Accelerate the learning curve in a new framework.

In my experience, handing the AI a well-chosen code snippet is gold: "a code snippet is worth a thousand words."

Common Errors and Limitations

  • AI can make mistakes with technical details.
  • It doesn't always understand the full context.
  • It can give more complex solutions than necessary (as happened to me in Django).

In short: AI does not replace human judgment.

The Role of the Programmer vs. Artificial Intelligence

From Idea Translator to Code Validator

The programmer's role changes: before, we wrote everything from scratch; now we also review and adapt what the AI proposes. The key skill is knowing how to evaluate.

Human Judgment as the Final Word

A code snippet is worth a thousand words

When I want a precise solution, I don't just write: "make me a CRUD in Django." I prefer to pass it code and add a brief explanation. This way, the answer starts from a solid context.

I saw it clearly when I compared several AIs. The difference was not so much in the quality of the response, but in my ability to decide which solution made sense.

When I code, I usually prompt the AI with a code block followed by a brief explanation of what I want:

# admin.py
@admin.register(Payment)
class PaymentAdmin(admin.ModelAdmin):
    list_display = ('id', 'user', 'orderId', 'price')

# models.py
class Payment(models.Model):
    content_type = models.ForeignKey(ContentType, on_delete=models.CASCADE)
    object_id = models.PositiveIntegerField()
    paymentable = GenericForeignKey('content_type', 'object_id')

This is key for me: the code itself carries most of the context, and then I tell it exactly what I want it to solve.

For me, a key phrase here is: "a code snippet is worth a thousand words."

Therefore, instead of explaining everything to it, I simply passed it:

  • The main code of the Payment model.
  • The relationship I was using (the generic ForeignKey to ContentType).
  • A brief explanation of what I wanted: to limit that generic relation so it only accepts Product or Book.

This type of prompt almost always works for me.

The solution ChatGPT gave me was to override an internal Django Admin method (formfield_for_foreignkey) to filter the content_type choices. It explained the logic well, but it didn't work because the returned data type was incorrect. Instead of simplifying, it made the query more complicated:

# ChatGPT's suggestion: add this method to PaymentAdmin
def formfield_for_foreignkey(self, db_field, request, **kwargs):
    if db_field.name == "content_type":
        allowed_models = [Product, Book]
        allowed_cts = ContentType.objects.get_for_models(*allowed_models).values()
        kwargs["queryset"] = ContentType.objects.filter(id__in=[ct.id for ct in allowed_cts])
    return super().formfield_for_foreignkey(db_field, request, **kwargs)

From this I draw several conclusions:

  • ChatGPT is not perfect. It can be wrong even with simple queries.
  • AI is an assistant, not a magic solution. You have to evaluate what it returns and adapt it.
  • The prompt matters. A well-placed code snippet is worth more than many explanations.
  • You have to know how to code. If you don't understand what you're asking for, how are you going to validate the answer?
  • You should always use more than one tool (more than one AI, in this case).

For me, this is the key to using AI as a tool: as developers, the best use we can give it starts with knowing how to code ourselves, so we can squeeze the best results out of it.

Many who generate apps in seconds with AI don't really know what's happening in the code.
That is dangerous because they see everything as a black box.

Comparison with other AIs: Gemini and Perplexity

I passed Gemini the same prompt and it returned a solution I didn't quite understand at first: it created an additional attribute with the filtering rules, which didn't convince me. Later I noticed that it did apply that attribute to the model, but I still didn't like the syntax, or at least I hadn't understood it at first glance:

limit = Q(app_label='your_app_name', model='product') | Q(app_label='your_app_name', model='book')
content_type = models.ForeignKey(ContentType, on_delete=models.CASCADE, limit_choices_to=limit)

A problem with Gemini: it doesn't maintain context as well. With ChatGPT, I can ask it "what is limit" and it understands. Gemini, on the other hand, often doesn't connect with what was said before; it gave me an explanation of what 'limit' is in SQL, which had nothing to do with the initial query.

Then I went to Perplexity, which almost no one mentions, and it was the most accurate.
Its response was exactly what I needed:

content_type = models.ForeignKey(
    ContentType,
    on_delete=models.CASCADE,
    limit_choices_to=Q(app_label='mi_app', model='product') | Q(app_label='mi_app', model='book')
)

I needed to limit a generic field in my Payment model to only accept Product or Book. I passed it the code and asked for help. ChatGPT gave me an option with formfield_for_foreignkey that didn't work completely. Gemini offered me another one, but lost the thread of the context. Finally, Perplexity returned exactly what I needed with limit_choices_to. That contrast showed me that using multiple AIs in parallel is key.
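To close the example, here is a minimal sketch of how the final Payment model could end up looking with the limit_choices_to approach applied. The app label 'mi_app' and the Product and Book models are assumptions carried over from the snippets above:

# models.py -- minimal sketch; assumes Product and Book live in the app 'mi_app'
from django.db import models
from django.db.models import Q
from django.contrib.contenttypes.models import ContentType
from django.contrib.contenttypes.fields import GenericForeignKey

class Payment(models.Model):
    # Restrict the generic relation so the admin (and any ModelForm)
    # only offers Product or Book as the content type.
    content_type = models.ForeignKey(
        ContentType,
        on_delete=models.CASCADE,
        limit_choices_to=Q(app_label='mi_app', model='product') | Q(app_label='mi_app', model='book'),
    )
    object_id = models.PositiveIntegerField()
    paymentable = GenericForeignKey('content_type', 'object_id')

What I like about this approach is that the restriction lives on the field itself, so the Django Admin picks it up without having to override any admin method.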

Practical Comparison: ChatGPT vs Gemini vs Perplexity in Programming

ChatGPT: good in context, but makes mistakes

Advantage: understands when you follow up on a conversation.
Problem: can give erroneous solutions that appear correct.

Gemini: loses continuity

Advantage: quick and concise answers.
Problem: when asked about "limit," it answered about SQL, unrelated to the Django case. It loses context.

Perplexity: the most useful surprise

Advantage: suggests more tailored answers and with documentation.
In my case, it was the one that provided the correct solution to the ForeignKey problem.

Final Lessons

From all this, I'm left with 5 points:

  • A code snippet is worth a thousand words: by passing code we hand the AI the context of what we want, and its answer builds on that code.
  • Knowing how to code is essential. The foundation is still human knowledge; without it you won't know what to ask for, how to validate the answer, or even which code to pass from the first point.
  • Don't stick with just one AI. Each tool has strengths and weaknesses; using several in parallel lets you compare and cross-reference results.
  • Human judgment is what makes the difference.
  • Use AI as a copilot, not as a pilot. In programming it is a great help, but you decide the direction, the AI speeds up the process, and the developer always has the final say.

FAQs

  • Is it advisable to learn to code if AI exists?
    Yes, because without basic knowledge you cannot validate or adapt what the AI generates.
  • Which AI is better for programming: ChatGPT, Gemini, or Perplexity?
    It depends on the case. ChatGPT is good in context, Gemini is more limited, and Perplexity was surprising for its accuracy.
  • Can you code without knowing code using only AI?
    You can, but you shouldn't. You'll be tied to a black box with no validation capability.
  • What are the risks of depending too much on AI in programming?
    Undetected errors, unnecessarily complex code, and total dependence on the tool.
  • How to write good prompts for programming with AI?
    Use code snippets, provide context, and ask for step-by-step explanations.



Andrés Cruz
