Serious Joomla security hole

On August 15, 2013, in Announcements, How-to, Technical, by Raymond

In case you haven't heard, Joomla versions 2.5.x and 3.1.x have a big security hole that allows any user who browses your site to upload malicious PHP files, regardless of their permissions.

Joomla is a Content Management System (CMS) that can be installed on a web site. It isn’t part of your hosting account by default, so if you haven’t installed it, you are not at risk for this particular security issue.

Joomla already has a filtering mechanism to prevent files with certain extensions from being uploaded through the application. However, the bug allows files with a trailing dot ( . ) to bypass that filter. An attacker can therefore take a file with a .php extension, append a period to the name, and upload it, after which the server will process the file as a PHP application. For example, Joomla normally will not allow a file called somefile.php to be uploaded, but somefile.php. (with the trailing dot) can be uploaded to the affected versions.

Versafe, an online fraud protection company, discovered this serious exploit. If your Joomla application is compromised, it can be turned into a phishing site that redirects visitors to a malicious site designed to steal personal information, or it can become a repository for malware and Trojan programs that infect anyone who visits the page.

Resolution

To solve this problem, you should upgrade to the latest version of Joomla. If upgrading is not an option for you, you can add a line of code that strips the trailing dot ( . ) from the file name before the upload begins, so the upload can no longer bypass the media manager's filtering mechanism.

The file you need to modify is file.php, located under Libraries/Joomla/FileSystem. Within the makeSafe function, add the line:

// Remove any trailing dots, as those aren't ever valid file names.
$file = rtrim($file, '.');

If this line already exists in file.php, then the hole has already been closed and your Joomla application should be immune to this particular exploit.

What DiscountASP is doing

On our end, as of today we have updated our Web App Gallery to the latest version of Joomla that already has this patch. Therefore, if you downloaded and installed your Joomla application through our Web App Gallery today, your Joomla application should be protected from this exploit.

However, this security hole is considered a "zero-day exploit," which means the vendor was unable to react before millions of Joomla applications became susceptible to the threat. If your site runs one of the affected versions of Joomla, the chances that your web application is vulnerable are high, and you should take immediate action.

I also encourage you to read these articles for more details on the Joomla exploit.

http://www.versafe-login.com/?q=versafe-discovers-critical-joomla-exploit

http://joomlacode.org/gf/project/joomla/tracker/?action=TrackerItemEdit&tracker_item_id=31626

http://www.thewhir.com/web-hosting-news/joomla-users-urged-to-apply-critical-security-patch-to-prevent-malware-phishing

 

When people want to start incorporating e-commerce activities into their site, they must be PCI compliant to do so. There are many different companies and organizations out there that can help you determine whether your web site is PCI compliant.

Many criteria must be met to be PCI compliant. One of those criteria is to set up a custom error page. Custom error pages are important because they hide the true errors your application may display. Those true error messages can be used to dissect and infer the back-end design and structure of your web application, information that can then be used to find weaknesses and exploit it.

In IIS, there are two types of custom error handling that you will need to set on your site. One is the traditional IIS custom error, which is processed at the IIS level. The other is ASP.Net custom error handling, which is processed at the ASP.Net application level. ASP.Net custom error handling is a bit tricky because of how requests move through the handlers; where a request bombs out ultimately determines whether your ASP.Net custom error page is triggered.

In most cases, setting up ASP.Net custom error handling is straightforward: you simply add the "customErrors" element to your application's web.config file. The main attribute you need to set is the "mode" attribute of the "customErrors" element. You have three distinct choices: "On", "Off", or "RemoteOnly". You want to set it to either "On" or "RemoteOnly".
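As a point of reference, here is a minimal sketch of what that might look like in web.config; the defaultRedirect page name (error.html) is just a placeholder for your own custom error page:

<configuration>
  <system.web>
    <!-- RemoteOnly shows the real error only to local requests; visitors see the custom page -->
    <customErrors mode="RemoteOnly" defaultRedirect="~/error.html" />
  </system.web>
</configuration>

The IIS-level custom errors mentioned above are configured separately, under the <httpErrors> element inside <system.webServer>.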

Now here's a caveat to being PCI compliant: some services will scan your site and test specific conditions to determine whether you have custom error handling enabled. One of the conditions they may throw at your application is a request with a URL similar to:

https://www.domain.com/[fakefile]?aspxerrorpath=/

which will not display your custom error page. However, if you call a file that you know does not exist on your site:

https://www.domain.com/[fakefile]

you will find that your custom error page is displayed.

Confusing? Take a short breather and let me quickly explain.

Your ASP.Net custom error handling does work; the way your site reacts when someone requests it with the "/[fakefile]?aspxerrorpath=/" condition is actually by design from Microsoft. That condition in the URL request stops processing before it reaches the custom error handler, so the request bombs out before your custom error page can be displayed.

So what's the solution? The only workaround we could find is to set up a RequestFiltering rule to filter out the string "/[fakefile]?aspxerrorpath=/" from the URLs being requested on your site.

In your web.config file, add this code to your requestFiltering element:

<requestFiltering>
  <denyQueryStringSequences>
    <add sequence="aspxerrorpath=" />
  </denyQueryStringSequences>
</requestFiltering>

Another way to implement this rule is through IIS 7 or IIS 8 Manager: open the Request Filtering module and set up the rule under the Query Strings tab, with the query string "aspxerrorpath=" set to Deny.

[Screenshot: the "aspxerrorpath=" query string deny rule on the Query Strings tab in IIS Manager]

This will block the URL string "/[fakefile]?aspxerrorpath=/", thereby forcing the request to call the file directly, which in turn displays your custom error page.

 

WordPress under attack

On April 16, 2013, in Announcements, Technical, by Raymond

We have seen an influx of attacks against WordPress sites. The attack uses an old method, the brute force attack, and the main targets are WordPress sites that still use the default administrative login "Admin." With half of the credentials pretty much solved, the attacker repeatedly tries passwords until they finally find the right one.

This lapse in security has been well known in the WordPress community. Tony Perez has asked why WordPress itself has not enforced stronger password restrictions and required that the Admin login be changed; the response he and the WordPress community received was that "it's just not a relevant issue."

The fix for this is fairly simple. First, change the administrative user name from the default "Admin" to something more personal. Second, update the password to something more sophisticated. It is recommended that you use a minimum length of 8 characters and include letters, numbers, and special characters such as "#", "$", or "%". Mixing lower case and upper case characters in your password will also help strengthen it.

The exploit has had a substantial impact on web hosting companies like DiscountASP.NET. When a personal computer gets compromised, the bandwidth available to that computer is limited, but with a web hosting company the bandwidth is almost unlimited. When a WordPress site is compromised, the hacker then uses that site to send out attacks against other servers and hosting companies.

With that nearly unlimited bandwidth at their disposal, the effects can be devastating. The owner of the account is affected as well: with high bandwidth consumption, they may be charged extra for the bandwidth their WordPress site uses.

Another security measure that can be employed to mitigate this attack is to incorporate WordPress two-step authentication. This is an optional new feature you can enable for your WordPress site, and it uses the Google Authenticator app.

It is a second verification input, on top of the password, that takes a randomly generated code from the Google Authenticator app. The verification code is refreshed every 30 seconds, making it practically impossible to guess. You may want to read more about this new security feature on this WordPress link.

Make no mistake, WordPress is taking this attack seriously, and its effects have been widespread among many hosting companies.

If you want to find out more about this widespread attack against WordPress sites, here are a couple of links that you might find helpful:

http://ma.tt/2013/04/passwords-and-brute-force/

http://www.bbc.co.uk/news/technology-22152296

Incidentally, this attack not only targets WordPress but Joomla web applications as well. I did not research any Joomla attacks, but if you have a Joomla site and you are using its default administrative login "Admin," you may want to update the login name and give it a more complex password, just in case.

 

Getting started with Node.js

On August 9, 2012, in How-to, Technical, by Raymond

Node.js is a new interpreter that allows JavaScript to be processed on the server side. It is installed and enabled (as a beta service) on the DiscountASP.NET web servers. To gain full use of node.js, all you need to do is set up a handler mapping that routes all .js file extensions to the Node.js interpreter. This is done within the application's web.config file.

There are two ways to do this.  The easiest way is through IIS 7 Manager.  You can use this Knowledge Base article for details on how to connect to our web server with IIS 7 Manager.

Once connected, go to the "Handler Mappings" module and click "Add Module Mapping."

In the dialog, set the Request path to *.js, the Module to iisnode, and give the mapping a name such as iisnode (the same values used in the web.config example below). Now you have your application set to push all *.js files to Node.js.

You do not have to use IIS 7 Manager to create this mapping; you can code it directly in your application's web.config file. Look for the 'system.webServer' element within the configuration section of the web.config file and add the handler line:

<configuration>
  <system.webServer>
    <handlers>
    <add name="iisnode" path="*.js" verb="*" modules="iisnode" />
    </handlers>
  </system.webServer>
</configuration>

One thing you need to be aware of is that all *.js files will now be processed by the node.js engine. The side effect is that some of the JavaScript on your pages may no longer function correctly, because those files are no longer being sent to the client's browser for processing. I tested an HTML5 page that uses a lot of JavaScript, and some of the features on the page did not work.

There are a couple of work-arounds to this:

1. Upload the .js file you want to be processed by node.js to its own subfolder.  Then add the web.config setting to that folder.

2. You can specify which .js file gets processed by the node.js engine. In the path field, rather than using a wildcard, you can set path="somefile.js" (see the example below).
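For example, a handler mapping limited to a single file might look like this (somefile.js stands in for your own Node.js script):

<configuration>
  <system.webServer>
    <handlers>
      <!-- Only somefile.js is handed to the node.js engine; other .js files are served to the browser as usual -->
      <add name="iisnode" path="somefile.js" verb="*" modules="iisnode" />
    </handlers>
  </system.webServer>
</configuration>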

 

It is often said that if you do not want your information to be stolen, don't put it on the Internet. However, the Internet has become an integral part of our lives, and we can't help but post some kind of web site, blog, or forum. Even if you don't tell anyone about your web site, once it is published it will eventually be discovered.

How, you ask?  By robot indexing programs, A.K.A. bots, crawlers and spiders.  These little programs swarm out onto the Internet looking up every web site, caching and logging web site information in their databases.  Often created by search engines to help index pages, they roam the Internet freely crawling all web sites all the time.

Normally this is an acceptable part of the Internet, but some search engines are so aggressive that they can noticeably increase your bandwidth consumption. And some bots are malicious, stealing photos from web sites or harvesting email addresses so they can be spammed. The simplest way to turn these bots away is to create a robots.txt file that contains instructions not to index the site:

User-agent: *
Disallow: /

However, there is a problem with this approach: robots.txt is only advisory, so a bot can still hit the site, ignoring your robots.txt file and your wish not to be indexed.

But there is good news. If you are on an IIS 7 server, you have another alternative: the RequestFiltering rules built into IIS 7. Request filtering works at the web server level, so it cannot be bypassed by a bot.

The setup is fairly simple, and the easiest and fastest way to create your RequestFiltering rule is to code it in your application's web.config file. The requestFiltering element goes inside the <system.webServer><security> elements. If these elements do not already exist in your application's web.config file, you can create them. Once they are in place, use this schema to set up your RequestFiltering rule:

<requestFiltering>
	<filteringRules>
		<filteringRule name="BlockSearchEngines" scanUrl="false" scanQueryString="false">
			<scanHeaders>
				<clear />
				<add requestHeader="User-Agent" />
			</scanHeaders>
			<appliesTo>
				<clear />
			</appliesTo>
			<denyStrings>
				<clear />
				<add string="YandexBot" />
			</denyStrings>
		</filteringRule>
	</filteringRules>
</requestFiltering>
<authentication>
	<basicAuthentication enabled="true" />
	<anonymousAuthentication enabled="true" />
</authentication>

You can name the filtering rule whatever you'd like. In the "requestHeader" element you need to specify "User-Agent," and within the "add string" element you specify the user agent name to block. In this example I set it to YandexBot, which blocks a search engine originating from Russia. You can also block search engines such as Googlebot or Bingbot.
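For example, to block several crawlers with the same rule, you can simply add more entries to the denyStrings collection inside the filtering rule:

<denyStrings>
  <clear />
  <add string="YandexBot" />
  <add string="Googlebot" />
  <add string="Bingbot" />
</denyStrings>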

If you want to see whether the rule is actually blocking these bots, download your raw HTTP logs from the server and parse them, looking for the User-Agent field. In the sc-status field (the status code) you should see a 404 HTTP response for the blocked requests. The logs also carry an sc-substatus field, which holds a substatus code qualifying the primary HTTP response code.

Here are the request filtering substatus codes you are most likely to see: a rule like the one above, which denies based on a request header, is logged as 404.19 (denied by filtering rule), while denied query string sequences and denied URL sequences show up as 404.18 and 404.5 respectively.

 