A couple of weeks ago, I ran my standard suite of programming tests against the free version of the Perplexity.ai chatbot. At the end of that article, I offered to run the tests against the $20/month Pro version if enough of you were interested. I did get some requests, so that's what we're doing here.
Also: How I test an AI chatbot's coding ability – and you can too
As with most other pro versions, to use Perplexity Pro, you have to create an account. You can sign up using either Google or Apple authentication or a SAML sign-in. Alternatively, you can create an account using your email address, which is what I did.
Unfortunately, the site doesn't seem to give you any way to set a password or any form of multifactor authentication. You're sent an email with a code, and that's it. I don't mind getting an email code, but I'm genuinely disturbed by web apps relying solely on an email code without, at the very least, a password. But that's what Perplexity.ai is doing.
Also: 5 reasons why I prefer Perplexity over every other AI chatbot
The other interesting aspect of Perplexity Pro is its cornucopia of AI models. As you can see in the image below, you can choose between a variety of different models, based on the kind of work you have. I chose Default to see what that did with the tests. After running the tests, I asked Perplexity Pro what model it used for them, and it told me ChatGPT GPT-4.
And with that, let's run some tests.
1. Writing a WordPress plugin
This challenge is a fairly simple programming task for anyone with a modicum of web programming experience. It presents a user interface in the admin dashboard with two fields: one is a list of names to be randomized, and the other is the output.
The one real gotcha is that the list of names can have duplicates and, rather than removing the extra names, the instructions are to make sure the duplicate names are separated from one another.
Also: How to use ChatGPT to write code: What it can and can't do for you
This was a real, requested function that my wife needed for her e-commerce site. Every month, they do a wheel spin, and some people qualify for multiple entries.
Using Perplexity Pro's default model, the AI succeeded in producing a workable user interface and functional code, providing both a PHP block and a JavaScript block to control the text areas and the randomization logic.
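To give a sense of what that randomization logic involves, here's a minimal sketch in plain PHP of one way to shuffle the list while keeping duplicates apart. This is my own illustration, not the code Perplexity Pro generated, and the function name and sample names are invented for the example.

```php
<?php
// Minimal sketch: randomize a list of names while keeping duplicate
// entries spaced apart. Illustrative only -- not Perplexity Pro's output.
function spread_duplicates(array $names): array {
    shuffle($names);                      // randomize tie-breaking order
    $counts = array_count_values($names); // how many times each name appears
    arsort($counts);                      // most frequent names first

    $total = count($names);
    $slots = array_fill(0, $total, null);
    $index = 0;

    foreach ($counts as $name => $count) {
        for ($i = 0; $i < $count; $i++) {
            $slots[$index] = $name;
            $index += 2;          // fill the even slots first...
            if ($index >= $total) {
                $index = 1;       // ...then wrap around to the odd slots
            }
        }
    }
    return $slots;
}

// The three entries for "Dana" end up separated rather than adjacent
// (this works as long as no name makes up more than half the list).
print_r(spread_duplicates(['Dana', 'Alex', 'Dana', 'Sam', 'Dana', 'Lee']));
```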
Here are the aggregate results of this and previous tests:
- Perplexity Pro: Interface: good, functionality: good
- Perplexity: Interface: good, functionality: good
- Claude 3.5 Sonnet: Interface: good, functionality: fail
- ChatGPT using GPT-4o: Interface: good, functionality: good
- Microsoft Copilot: Interface: adequate, functionality: fail
- Meta AI: Interface: adequate, functionality: fail
- Meta Code Llama: Complete failure
- Google Gemini Advanced: Interface: good, functionality: fail
- ChatGPT using GPT-4: Interface: good, functionality: good
- ChatGPT using GPT-3.5: Interface: good, functionality: good
2. Rewriting a string function
For each test, I open a new session with the AI. In this test, I'm asking the AI to rewrite a block of code that had a bug. The code was designed to validate the input of dollars and cents, which should contain a certain number of digits before the decimal point, a possible decimal point, and two digits after the decimal point.
Also: Yikes! Microsoft Copilot failed every single one of my coding tests
Unfortunately, the code I shipped only allowed integer numbers. After a few user reports, I decided to feed the code to the AI for a rewrite. My code uses regular expressions, which are a formulaic way of specifying a format. Regular expressions themselves are fun, but debugging them is not.
In the case of this test, Perplexity Pro did an excellent job. The resulting validation code properly flagged items that didn't match the format for dollars and cents, allowing up to two digits after the decimal.
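For reference, the format described above can be captured with a single regular expression. Here's a minimal sketch in PHP; it's my own example, not the rewrite Perplexity Pro produced, and the function name is a placeholder.

```php
<?php
// Minimal sketch: validate a dollars-and-cents string -- digits before an
// optional decimal point, with at most two digits after it.
// Illustrative only; not the AI's rewrite of my shipped code.
function is_valid_dollar_amount(string $value): bool {
    // ^\d+          one or more digits before the decimal point
    // (\.\d{1,2})?  an optional decimal point followed by one or two digits
    return preg_match('/^\d+(\.\d{1,2})?$/', $value) === 1;
}

var_dump(is_valid_dollar_amount('1250'));     // true  -- integer dollars
var_dump(is_valid_dollar_amount('1250.7'));   // true  -- one digit of cents
var_dump(is_valid_dollar_amount('1250.75'));  // true  -- two digits of cents
var_dump(is_valid_dollar_amount('1250.755')); // false -- too many decimal digits
```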
Here are the aggregate results of this and previous tests:
- Perplexity Pro: Succeeded
- Perplexity: Succeeded
- Claude 3.5 Sonnet: Failed
- ChatGPT using GPT-4o: Succeeded
- Microsoft Copilot: Failed
- Meta AI: Failed
- Meta Code Llama: Succeeded
- Google Gemini Advanced: Failed
- ChatGPT using GPT-4: Succeeded
- ChatGPT using GPT-3.5: Succeeded
3. Finding an annoying bug
This test had me stumped for a couple of hours. Before it was a test, it was a bug in the code for an actual product. The problem was that whatever was going wrong wasn't related to any obvious logic or language issue.
Also: I asked ChatGPT to write a WordPress plugin I needed. It did it in less than 5 minutes
Being severely frustrated, I decided to feed ChatGPT the code, along with the error dump, and ask it for help. Fortunately, it found what I had done wrong and gave me guidance on what to fix.
The reason I'm including this in the set of tests is that the bug wasn't in language or logic; it was in knowledge of the WordPress framework. While WordPress is popular, framework knowledge is often considered the folklore of a programming environment, something passed down from developer to developer, rather than something that could be rigorously learned from a knowledge base.
However, ChatGPT, as well as Perplexity and now Perplexity Pro, did find the problem. The error was due to a parameter-calling issue buried in the framework itself. The obvious answer, which you might come up with strictly by reading the error messages generated by the code, was actually wrong.
Also: Uber One subscribers get a free year of Perplexity Pro. Here's how to claim it
To solve it, the AI had to demonstrate a deeper understanding of how all the systems work together, something Perplexity Pro did successfully.
Here are the aggregate results of this and previous tests:
- Perplexity: Succeeded
- Perplexity Pro: Succeeded
- Claude 3.5 Sonnet: Succeeded
- ChatGPT using GPT-4o: Succeeded
- Microsoft Copilot: Failed
- Meta AI: Succeeded
- Meta Code Llama: Failed
- Google Gemini Advanced: Failed
- ChatGPT using GPT-4: Succeeded
- ChatGPT using GPT-3.5: Succeeded
4. Writing a script
Well, this is interesting. Perplexity Pro passed this test, but the free version of Perplexity failed when I tested it a few weeks ago. So, yay!
But let's dive into this a bit. The challenge here is that I ask the AI to write a script that intersects three environments: the Chrome DOM (document object model), AppleScript (Apple's native scripting language), and Keyboard Maestro (a very cool Mac automation tool that is fairly obscure but, to me, mission-critical).
Also: Can Meta AI code? I tested it against Llama, Gemini, and ChatGPT – it wasn't even close
Most of the AIs failed because they didn't have any information on Keyboard Maestro in their knowledge bases and, as such, didn't provide the necessary code for the script to do what I wanted.
Until now, only Gemini Advanced and ChatGPT using GPT-4 and GPT-4o had passed this test. In answering the question, Perplexity Pro provided a Pro Search view. As you can see, the Pro Search view did a search for "Keyboard Maestro AppleScript Google Chrome tabs." It also used the main Keyboard Maestro forum as a source, which is the best source for getting Keyboard Maestro coding help.
The result was a success.
Here are the aggregate results of this and previous tests:
- Perplexity Pro: Succeeded
- Perplexity: Failed
- Claude 3.5 Sonnet: Failed
- ChatGPT using GPT-4o: Succeeded, but with reservations
- Microsoft Copilot: Failed
- Meta AI: Failed
- Meta Code Llama: Failed
- Google Gemini Advanced: Succeeded
- ChatGPT using GPT-4: Succeeded
- ChatGPT using GPT-3.5: Failed
Overall results
Here are the overall results of the four tests:
As you can see, Perplexity Pro joins only ChatGPT with GPT-4 and GPT-4o in having a perfect score of four out of four tests passed. After running my tests, I checked with Perplexity Pro's AI, and it informed me it used GPT-4 to analyze and respond to my tests.
Given that GPT-4/4o was the only AI that had nailed all four of my tests before, this makes sense. So far, I haven't found any other model that can fully and correctly pass all four programming tests.
Also: How to run dozens of AI models on your Mac or PC – no third-party cloud needed
If you choose Perplexity Pro, I can fairly confidently state that it should be able to do an excellent job of helping you program.
Have you tried coding with Perplexity, Copilot, Meta AI, Gemini, or ChatGPT? What has your experience been? Let us know in the comments below.
You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.