I'm expecting bad news - JavaScript generated pages


  • sadkkf
    replied
    Hey, thank you very much for the info.

    As I said, there's a long history to the development of the site. It's gone through a lot of changes since its first concept, not only in design but in intended audience as well. The inclusion of JS was not part of the original concept.

    Anyway, the problem still exists. I understand the options you've presented and can't thank you enough for the input. I'll make some suggestions to my customer and see how they'd like to proceed.

    Thank you again!

    :kevin


  • Ray
    replied
    We had a look at the site. You have some static HTML pages per section, and those can be indexed. But you do have JavaScript-only content as well, and that won't be indexed by any search solution (for example, the content for each "year" that appears when you hover over it on the "About Us" page).
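
    To illustrate (a hypothetical snippet, not the site's actual markup): content injected like this exists only in the browser after the script runs, so a spider downloading the raw HTML sees nothing but an empty placeholder:

        <div id="year-1995"></div>
        <script type="text/javascript">
        // Runs only in a JavaScript-capable browser; a spider fetching
        // the raw HTML sees nothing but the empty <div> above.
        document.getElementById("year-1995").innerHTML =
            "... hover content for 1995 ...";
        </script>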

    You can get a fair idea of what your site will look like to a spider (or anyone with a limited browser, or just a browser with JS disabled) by opening Firefox, clicking "Tools" -> "Options" -> "Content" and unchecking the "Enable JavaScript" option. Now load up your website and see what's missing.

    There are disadvantages to JavaScript and AJAX approaches that you should be aware of before adopting them; they're not appropriate for all scenarios.

    You might be able to get away with using <noscript> tags and offering alternate versions of much of this content.

    On the "Cheeses" page, the JavaScript only serves links to actual static HTML pages for each type of cheese. So while the spider won't pick up these links (because they're generated on the fly by JavaScript), you can make the spider index these files either by using Offline Mode or by adding links via the <noscript> block.

    Please see this FAQ:
    Q. Why are links in my Javascript menus being skipped?

    Originally posted by sadkkf:
    And I'm not seeing all of the pages in the log when I index. How does Zoom know to scan a page? Does it grab a list of all the files from the server or does it follow the links?
    See these FAQs:
    Q. Why are some of my pages being skipped by the indexer?
    Q. I am indexing with spider mode but it is not finding all the pages on my web site

    If you are using Spider Mode, it will follow the links. If you are using Offline Mode, it will scan all the appropriate files in your folder.

    See the Users Guide for more information on the indexing modes:
    http://www.wrensoft.com/zoom/usersguide.html


  • sadkkf
    replied
    www.rothkase.com

    They were very excited about the hover-style navigation and wanted it on all the pages. There's a history to the development of this site, and it's not over yet; I fear that if I mention this, they'll be unhappy. They're great people, but eager to finish.

    And I'm not seeing all of the pages in the log when I index. How does Zoom know to scan a page? Does it grab a list of all the files from the server or does it follow the links?


  • Ray
    replied
    Kevin - if you can give us a URL to the site in question, we can probably tell you a bit more definitively.


  • David
    replied
    If all your important content is in a single file, then no conventional search function is going to help you. It will also destroy any chance of getting a good ranking in Google (in my opinion), which for many online businesses is enough to put them out of business.

    This is what Google says: "...because most search engine spiders see your site much as Lynx would. If fancy features such as JavaScript, cookies, session IDs, frames, DHTML, or Flash keep you from seeing all of your site in a text browser, then search engine spiders may have trouble crawling your site."


  • sadkkf
    replied
    They have half a dozen or so pages that bring in an XML file detailing their product info and just about all of their other important information. I'll have to run some more tests, but I'm sure they're going to be disappointed with the results.

    How much would the meta tags help with this?

    :kevin


  • David
    replied
    If the site only has a single HTML page, and the content is all generated client-side as a result of JavaScript being executed, then yes, it is bad news for all search engines (and bad news for people wanting to bookmark pages on the site, e-mail links to friends, etc.).

    But PHP/MySQL pages are no problem: the HTML is generated on the server, so the spider receives the finished page just as a browser would.


  • sadkkf
    started a topic I'm expecting bad news - JavaScript generated pages

    Hi--

    A customer wants a search engine on their site, but I'm pretty sure it won't work. Most of their pages have content coming in from an XML file via JavaScript. Other pages are created via PHP/MySQL.
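
    Roughly this sort of thing (a simplified sketch; the real file and element names differ):

        <script type="text/javascript">
        // Pull product info from an XML file and build the page client-side.
        // A spider that doesn't execute JavaScript never sees any of it.
        var req = new XMLHttpRequest();
        req.open("GET", "products.xml", false); // synchronous for brevity
        req.send(null);
        var items = req.responseXML.getElementsByTagName("product");
        for (var i = 0; i < items.length; i++) {
            document.getElementById("products").innerHTML +=
                "<p>" + items[i].getAttribute("name") + "</p>";
        }
        </script>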

    Is this doomed to fail?

    :kevin