Full GEO Report for https://jobs.psgglobalsolutions.com/

Detailed Report:

GEO Assessment — jobs.psgglobalsolutions.com/

(Score: 0%) — 05/14/26


Overview:

On 05/14/26, jobs.psgglobalsolutions.com scored 0% (**Very Poor**). This run didn't return usable results in any section, so it's hard to get a clear read on AI visibility right now.

Website Screenshot

Executive summary

Across the areas reviewed, the report output shows errors instead of clear findings for Discoverability, Structured Data, AI Readiness, Performance, Reputation, and LLM-Ready Content. Because every section failed at once, this run gives a limited overall picture rather than a mixed or category-specific one.

Score Breakdown (High Level)

  • Discoverability: 0% - Error calculating score
  • Structured Data: 0% - Error calculating score
  • AI Readiness: 0% - Error calculating score
  • Performance: 0% - Error calculating score
  • Reputation: 0% - Error calculating score
  • LLM-Ready Content: 0% - Error calculating score

(Each section returned a raw internal error, a pending asyncio `score_individual()` task at `scoring.py:175` truncated mid-message, rather than a normal score and findings.)
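For readers curious about the error text above: the messages reference a pending asyncio task inside the grader's scoring service. A common cause of errors with this shape is one scoring coroutine failing inside an `asyncio.gather` call, so the report ends up printing an unfinished `Task` repr instead of a number. The sketch below is a hypothetical illustration only, not the grader's actual code; `score_individual`, the section names, and the failure are all assumptions based on the error text. It shows how `return_exceptions=True` can keep one failed section from blanking, or leaking internals into, the whole breakdown.

```python
import asyncio

async def score_individual(section: str) -> float:
    # Hypothetical per-section scorer; stands in for the real scoring call.
    if section == "Performance":
        raise RuntimeError("upstream fetch failed")
    return 0.8

async def score_all(sections):
    # return_exceptions=True collects each section's exception as a value
    # instead of letting one failure cancel the other pending tasks.
    results = await asyncio.gather(
        *(score_individual(s) for s in sections), return_exceptions=True
    )
    report = {}
    for section, result in zip(sections, results):
        if isinstance(result, Exception):
            # Surface a short, readable message rather than a raw Task repr.
            report[section] = f"Error calculating score: {result}"
        else:
            report[section] = f"{result:.0%}"
    return report

report = asyncio.run(score_all(["Discoverability", "Performance"]))
```

With this pattern, a clean section still reports its percentage (here, "80%") while a failed one reports a concise error line, which matches how the breakdown above is meant to read.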

Where things stand in this run

The main takeaway here is that the report didn’t produce usable findings in any of the evaluated sections, so there isn’t a clear snapshot of your current AI visibility from this pass. That’s less about “good vs. bad” signals and more about missing clarity in the output itself. Below, you’ll see a section-by-section breakdown showing where the report returned errors instead of normal insights. Once you have a clean run, this same layout will make it easy to spot which areas are actually driving the biggest gaps.

Detailed Report

Discoverability

❌ Section results could not be generated

What we saw

The report returned an “Error calculating score” message for this section instead of a normal set of findings. No section-specific details were provided to summarize what was found.

Why this matters for AI SEO

When this section doesn’t return results, you don’t get a reliable picture of how easily your site can be found and understood in AI-driven discovery contexts. It also limits how confidently you can interpret the rest of the report.

Next step

Re-run the grader to confirm whether the section can be evaluated cleanly and returns a complete set of findings.

Structured Data

❌ Section results could not be generated

What we saw

The report shows an “Error calculating score” message for this section, with no additional section notes. There isn’t enough output here to describe what was present versus missing.

Why this matters for AI SEO

Without a usable readout for this area, it’s harder to judge how clearly your pages communicate key details to systems that summarize and compare entities. That uncertainty makes overall visibility harder to assess.

Next step

Re-run the grader to see if this section produces complete, readable findings on the next pass.

AI Readiness

❌ Section results could not be generated

What we saw

This section also returned an “Error calculating score” message rather than normal insights. The detailed section output is blank, so there’s nothing concrete to summarize from this run.

Why this matters for AI SEO

AI readiness is where you’d normally expect a clear, plain-language read on how well your site content can be interpreted and reused in AI answers. If the section doesn’t render results, that clarity is missing.

Next step

Re-run the grader so you can get a complete readout for this section before drawing conclusions.

Performance

❌ Section results could not be generated

What we saw

The report indicates an “Error calculating score” for Performance, with no supporting narrative in the detailed section. That means this run didn’t provide any usable performance-related findings to interpret.

Why this matters for AI SEO

If this section can’t be evaluated, it creates a blind spot around whether the on-page experience supports consistent crawling and interpretation. That uncertainty can muddy the overall AI visibility picture.

Next step

Re-run the grader to confirm whether the performance section can be scored and summarized correctly.

Reputation

❌ Section results could not be generated

What we saw

This section shows an “Error calculating score” message and no detailed report content. As a result, there’s no section output here to summarize from the current run.

Why this matters for AI SEO

Reputation signals help AI systems decide what to trust and cite when multiple sources cover similar topics. When this section doesn’t return results, you lose an important part of the overall visibility story.

Next step

Re-run the grader to see if the reputation section can be assessed and reported normally.

LLM-Ready Content

❌ Section results could not be generated

What we saw

The report returned an “Error calculating score” message for LLM-Ready Content, and the detailed section is empty. That means this run didn’t produce any specific content-focused findings to review.

Why this matters for AI SEO

This section is typically where you’d see whether your content is easy for AI systems to summarize accurately and attribute correctly. Without output here, it’s hard to gauge how well the content is positioned for AI answers.

Next step

Re-run the grader to confirm the content section can be evaluated and returns a complete set of insights.

Does Anything Seem Off?

Thanks for taking our free GEO Grader for a spin. When we started this journey, the tool had a fairly long processing time because of everything we wanted to check, both onsite and offsite, so we made a few adjustments on the backend to speed things up. As a result, there are times when the grader may not get everything 100% right. If something feels off, we recommend running the tool a second time to confirm the results. From there, you're always welcome to reach out to us to schedule a GEO consultation, or to have your SEO provider validate the findings with a more detailed crawl and manual review.

Share This Report With Your Team

Enter email addresses to send this assessment report to colleagues