When you want to incorporate e-commerce activities into your site, you must be PCI compliant to do so. There are many companies and organizations out there that can help you determine whether your web site is PCI compliant.
Many criteria must be met to be PCI compliant. One of those criteria is setting up a custom error page. Custom error pages are important because they hide the true errors your application may display. True error messages can be used to dissect and infer the back-end design and structure of your web application, information that can then be used to find weaknesses and exploit it.
In IIS, there are two types of custom error handling that you will need to set on your site. One is the traditional IIS custom error, normally processed at the IIS level. The other is ASP.Net custom error handling, which is processed at the ASP.Net application level. ASP.Net custom error handling is a bit tricky because of how requests pass through the handlers; where a request fails can ultimately determine whether your ASP.Net custom error triggers.
In most cases setting up your ASP.Net custom error handling is straightforward. To achieve this, you simply add the "customErrors" element to your application's web.config file. The main attribute you need to check is the "mode" attribute of the "customErrors" element. You have three distinct choices: "On", "Off", or "RemoteOnly". You want to set it to either "On" or "RemoteOnly".
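A minimal sketch of what that looks like (the page names GenericError.htm and NotFound.htm are placeholders, not from the original post):

```xml
<configuration>
  <system.web>
    <!-- RemoteOnly shows detailed errors on localhost only;
         remote visitors always see the custom page -->
    <customErrors mode="RemoteOnly" defaultRedirect="GenericError.htm">
      <!-- Optionally map specific status codes to their own pages -->
      <error statusCode="404" redirect="NotFound.htm" />
    </customErrors>
  </system.web>
</configuration>
```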
Now here's a caveat to being PCI compliant: some services will scan your site and test specific conditions to determine whether you have custom error handling enabled. One of the conditions they may throw at your application is a request URL similar to "/[fakefile]?aspxerrorpath=/",
which will not display your custom error page. However, if you call a file that you know does not exist in your site, such as "/[fakefile]" by itself,
you will find that your custom error page is displayed.
Confusing? Take a short breather and let me quickly explain.
Your ASP.Net custom error handling does work, and the way your site reacts when someone requests it with the condition "/[fakefile]?aspxerrorpath=/" is actually by design from Microsoft. This condition in the URL request stops processing before it reaches the custom error handler, so the request fails before your custom error page can be displayed.
So what's the solution? The only workaround we have found is to set up a Request Filtering rule that filters out the string "/[fakefile]?aspxerrorpath=/" from URLs requested on your site.
In your web.config file, add this code as your requestFiltering element:
<requestFiltering>
  <denyQueryStringSequences>
    <add sequence="aspxerrorpath=" />
  </denyQueryStringSequences>
</requestFiltering>
Another way to implement this rule is through IIS 7 or IIS 8 Manager: in the Request Filtering module, under the Query Strings tab, add the query string "aspxerrorpath=" set to Deny.
This will block any URL of the form "/[fakefile]?aspxerrorpath=/", forcing the request to call upon the file directly.
We have seen an influx of attacks against WordPress sites. The attack is an old method, the brute force attack. The main targets are WordPress sites that still use the default administrative login "Admin." With half of the credentials pretty much solved, the attacker repeatedly tries passwords until they finally find the right one.
This lapse in security has been well known in the WordPress community. Tony Perez has asked why WordPress themselves have not offered stronger password restrictions and required that the Admin login be changed; the response he and the WordPress community received was "it's just not a relevant issue."
The fix for this is fairly simple. First, change the administrative credential from the default "Admin" user name to something more personal. Second, update the password to be more sophisticated. It is recommended that you use a minimum length of 8 characters, including letters, numbers, and special characters such as "#", "$", or "%". Mixing lower case and upper case characters in your password will also help strengthen it.
The exploit has had a substantial impact on web hosting companies like DiscountASP.NET. When a personal computer gets compromised, there is a limit in the bandwidth that computer may have, but with a web hosting company the bandwidth is almost unlimited. When a WordPress site is compromised, the hacker then uses that site to send out attacks on other servers and hosting companies.
With the nearly unlimited bandwidth at their disposal, the effects can be devastating. The owner of the account is affected as well: with high bandwidth consumption, they may be charged extra for the bandwidth their WordPress site uses.
Another security measure that can be employed to mitigate this attack is WordPress two-step authentication. This is an optional new feature you can enable for your WordPress site. It uses the Google Authenticator app.
It adds a second verification input on top of the password: a randomly generated code obtained from the Google Authenticator app. The verification code changes every 30 seconds, making it practically impossible to guess. You may want to read more on this new security feature on this WordPress link.
Make no mistake, WordPress themselves are taking this attack seriously, and the effects have been widespread among many hosting companies.
If you want to find out more about this widespread attack against WordPress sites, here are a couple of links that you might find helpful:
Coincidentally, this attack targets not only WordPress but Joomla web applications as well. I did not research any Joomla attacks, but if you have a Joomla site and you are using its default administrative login "Admin", you may want to change the login name and give it a more complex password, just in case.
There are two ways to do this. The easiest way is through IIS 7 Manager. You can use this Knowledge Base article for details on how to connect to our web server with IIS 7 Manager.
Once connected, go to the "Handler Mappings" module and click on "Add Module Mapping."
Set the request path to *.js, the module to iisnode, and give the mapping a name. Now you have your application set to push all *.js files to Node.js.
You do not have to use IIS 7 Manager to create this mapping. You can code it directly in your application's web.config file. Look for the 'system.webServer' element within the configuration of the web.config file and add the handler line:
<configuration>
  <system.webServer>
    <handlers>
      <add name="iisnode" path="*.js" verb="*" modules="iisnode" />
    </handlers>
  </system.webServer>
</configuration>
One side effect of this wildcard mapping is that it catches every .js file in the application. There are a couple of workarounds to this:
1. Upload the .js file you want to be processed by node.js to its own subfolder. Then add the web.config setting to that folder.
2. You can specify which .js file gets processed by the node.js engine. In the path field, rather than implementing a wild card, you can set: path="somefile.js".
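For example, the second option might look like this in web.config (the file name server.js is a placeholder for your own entry-point script):

```xml
<configuration>
  <system.webServer>
    <handlers>
      <!-- Only server.js is handed to the iisnode module;
           all other .js files are served as static content -->
      <add name="iisnode" path="server.js" verb="*" modules="iisnode" />
    </handlers>
  </system.webServer>
</configuration>
```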
It is often said that if you do not want your information to be stolen, don’t put it on the Internet. However, the Internet has become an integral part of our lives, and we can’t help but post some kind of web site, blog, or forum. Even if you don’t tell anyone about your web site, once it is published it will eventually be discovered.
How, you ask? By robot indexing programs, A.K.A. bots, crawlers and spiders. These little programs swarm out onto the Internet looking up every web site, caching and logging web site information in their databases. Often created by search engines to help index pages, they roam the Internet freely crawling all web sites all the time.
Normally this is an acceptable part of the Internet, but some search engines are so aggressive that they can increase bandwidth consumption. And some bots are malicious, stealing photos from web sites or harvesting email addresses so that they can be spammed. The simplest way to block these bots is to create a simple robots.txt file that contains instructions to block the bots:
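A minimal robots.txt of this kind, assuming you want to ask all crawlers to stay away from the entire site, looks like:

```text
User-agent: *
Disallow: /
```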
However, there are a couple of things wrong with this approach. One is that bots can still hit the site, simply ignoring your robots.txt file and your wish not to be indexed.
But there is good news. If you are on an IIS 7 server, you have another alternative. You can use the Request Filtering rules built into IIS 7. Request Filtering runs at an earlier stage of the web service pipeline, so a bot cannot simply ignore it the way it can ignore robots.txt.
The setup is fairly simple, and the easiest and fastest way to create your RequestFiltering rule is to code it in your application's web.config file. The requestFiltering element goes inside the <system.webServer><security> elements; if they are not already in your application's web.config file, you can create them. Once that is in place, use this schema to set up your RequestFiltering rule:
<requestFiltering>
  <filteringRules>
    <filteringRule name="BlockSearchEngines" scanUrl="false" scanQueryString="false">
      <scanHeaders>
        <clear />
        <add requestHeader="User-Agent" />
      </scanHeaders>
      <appliesTo>
        <clear />
      </appliesTo>
      <denyStrings>
        <clear />
        <add string="YandexBot" />
      </denyStrings>
    </filteringRule>
  </filteringRules>
</requestFiltering>
<authentication>
  <basicAuthentication enabled="true" />
  <anonymousAuthentication enabled="true" />
</authentication>
You can name the filtering rule whatever you'd like, and in the "requestHeader" element you need to specify "User-Agent." Within the "add string" element you specify the user agent name to block. In this example I set it to YandexBot, which blocks a search engine originating from Russia. You can also block search engines such as Googlebot or Bingbot.
If you want to verify that this rule is actually blocking these bots, download your raw HTTP logs from the server and search them for the User-Agent header. In the sc-status (status code) field you should see a 404 HTTP response, and the sc-substatus field will carry a substatus code that qualifies the primary HTTP response code.
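As a rough sketch, assuming you have downloaded a W3C-format log file locally, a quick search could look like this (the file name and the sample log line are hypothetical; real IIS logs declare their field order in a #Fields: header):

```shell
# Write one hypothetical W3C log line to a sample file for demonstration
printf '2013-05-01 12:00:00 GET / - 80 1.2.3.4 Mozilla/5.0+(compatible;+YandexBot/3.0) 404 19\n' > u_ex130501.log

# List requests whose User-Agent contains YandexBot; the trailing
# "404 19" fields show sc-status 404 with sc-substatus 19
grep -i "yandexbot" u_ex130501.log
```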
Here is a list of potential substatus codes you may see when you impose your RequestFiltering rule.
I would like to introduce you to our staff members so you can see who is on the other side of the support tickets and forum posts. Today we have a support staff member who has been with us for almost a year now, Ray.
My name is Ray, and I am a member of the DiscountASP.NET technical support staff. I was promised booty, bounty, and a pony to work here so I accepted the offer.
But on a more serious note, I graduated from UCLA with a Bachelor's Degree and worked as a systems/network administrator for the UCLA Fowler Museum for over three years before I decided to pursue some other interests of mine. I hold a number of technical certifications from both UCLA and Microsoft and have experience working with different technologies including Microsoft, Novell, and Unix/Linux. I also have some experience in designing websites and programming in languages such as Pascal, C++, C#, Ruby, and Unix/Linux shell scripting.
My former hobbies included watching anime, attending conventions, and gaming (please don’t mention MMORPG). My new hobbies include pumping iron at the gym and playing the guitar (acoustic).
My technical interests are Microsoft SQL Server, ASP.NET, and virtualization, and I hope to use all my skills and experience to provide the best technical support to our customers here at DiscountASP.NET.