Cadzow Knowledgebase

Security: Microsoft Remote Desktop Web Connection Cached by Search Engines

Microsoft provides a version of the Remote Desktop client as an ActiveX control and a webpage that hosts it. These files are installed on a web server and this enables a Remote Desktop / Terminal Services session to be launched from within a web browser.

The default page (default.htm) prompts for a server address, but the page can be modified to populate the server address (and listening port), so end-users do not need to know it.
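For illustration, a modified page might set the connection properties directly in the page's script. The object and property names below are drawn from the Remote Desktop ActiveX control but vary between versions of Microsoft's page, so treat this as a hypothetical sketch:

```html
<!-- Hypothetical fragment of a modified default.htm.
     Object and property names vary between versions of the
     Remote Desktop Web Connection page. -->
<script language="vbscript">
    ' Pre-populate the server address and listening port so
    ' end-users do not need to know them.
    MsRdpClient.Server = "ts.example.com"
    MsRdpClient.AdvancedSettings2.RDPPort = 3389
</script>
```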

If this page is referenced somewhere, such as your company's public-facing home page, search engine crawlers will find and index the page. This means that hackers can easily locate servers running Terminal Services, the ports they are listening on, and the code version, with a simple Google search:

… and many others.

This technique was used by the “Santy” worm to find and attack vulnerable installations of phpBB, and there is little doubt that many other products are susceptible to the same technique.

It may also invite manual attacks, such as attempts to exploit weak passwords.

However, this matter is not worth becoming hysterical over. It is not an exploitable fault or vulnerability in itself; it merely helps attackers find Terminal Services hosts which might otherwise have remained obscure, and it will only become a problem if an exploitable fault exists in Terminal Services now or in the future.

Ultimately it means that attacks on Terminal Services hosts can be far more efficient than probing random IP addresses, and that running on a non-standard port is no longer an effective obscurity defence.

In any case, organisations hosting Terminal Services should ensure that their entry pages do not appear in search engines.


Remote Desktop web pages are essentially private and should not be indexed by search engines. This can be prevented as follows:

Method 1

Add the following line to default.htm, somewhere between <head> and </head> and after <title>:

<meta name="robots" content="none">
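For example, the head of default.htm would then look something like this (the title text varies by version of the Microsoft page):

```html
<head>
<title>Remote Desktop Web Connection</title>
<meta name="robots" content="none">
<!-- remainder of head unchanged -->
</head>
```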

Then resubmit the page via Google's Remove Content utility or simply let the webcrawlers remove the page from their indexes on their next crawl.

Method 2

Add the path to robots.txt:

User-agent: *
Disallow: /tsweb/
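
The effect of the rule can be confirmed with Python's standard urllib.robotparser module. This is a sketch using the /tsweb/ path from the example above and example.com as a placeholder host:

```python
from urllib import robotparser

# Parse the robots.txt rules shown above.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /tsweb/",
])

# The Remote Desktop entry page is blocked from crawling...
print(rp.can_fetch("Googlebot", "http://example.com/tsweb/default.htm"))  # False
# ...but the rest of the site remains crawlable.
print(rp.can_fetch("Googlebot", "http://example.com/index.html"))  # True
```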

Note: Method 1 is preferred because, although robots.txt is an instruction to search engine webcrawlers, it is a public file and should not contain hints about private locations on your web server.



  • 11/05/2005 — Microsoft agreed it was not a security vulnerability and passed on to product owners.

  • 17/07/2005 — Tweak to text.

Copyright © 1996-2023 Cadzow TECH Pty. Ltd. All rights reserved.
Information and prices contained in this website may change without notice. Terms of use.
