
Google Adsense Testing Strange Keywords
2/6/2006 @ 9:29pm

Is it just me or is Google testing a new algorithm with their Adsense program which is showing some very off-topic keywords?

Comments  Permalink

Google's Perception Losing the "Cool" Edge
2/7/2006 @ 9:23am

Security, privacy, freedom of speech, copyrights, BMW.. what's happening to Google? News keeps popping up every day about controversies surrounding Google's empire. The blogosphere is teeming with rants about Google becoming evil.

It was not too long ago Microsoft was considered evil. Now, sentiment seems to be changing in their favor. Is Microsoft becoming "cool" again while Google's image slides?

This graph seems to think so:

Funny or real? Visit the site, entitled "Can Corporations Be Evil", and decide for yourself.

Comments  Permalink

G-Zapper Updated to v1.1
2/12/2006 @ 10:22am

The downloads have been steadily increasing for G-Zapper. Apparently, many users are concerned about the privacy of their search terms when using the popular search engine Google.

The latest version of G-Zapper now reads the Google cookie installed on your PC to determine the date Google installed it, along with the number of days your searches have been tracked since last deleting the cookie.

G-Zapper decodes the following entries in the Google cookie:

ID=9fa12af2d89fe936

Description: This is your unique Google ID. It is used to link your search terms to an IP address, effectively allowing Google to record your searches and link them to you.

TM

Description: This is the encoded date Google installed the cookie on your PC.

LM

Description: This is the encoded date you last saved preferences on Google's site.

S

Description: Unknown at this time.

The remaining cookie data is believed to be Google preferences. Still a curiosity is the expiration date of the cookie, which is set for 2038.
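For the curious, here is a rough sketch (in Python) of how the PREF cookie value can be pulled apart. It assumes the TM and LM fields are plain Unix timestamps, which matches what I have seen; the ID value below comes from the example above, while the TM/LM/S values are made up for illustration:

```python
from datetime import datetime, timezone

def parse_pref_cookie(cookie: str) -> dict:
    """Split a PREF cookie value like 'ID=...:TM=...:LM=...:S=...'
    into a dict of its fields."""
    return dict(part.split("=", 1) for part in cookie.split(":") if "=" in part)

def field_to_date(value: str) -> str:
    """Interpret a TM/LM field as seconds since the Unix epoch (assumption)
    and render it as a date."""
    return datetime.fromtimestamp(int(value), tz=timezone.utc).strftime("%Y-%m-%d")

# Sample cookie: real ID from the post, invented TM/LM/S for illustration.
cookie = "ID=9fa12af2d89fe936:TM=1139266165:LM=1139266165:S=dPCjVLJIrO0PeMCb"
fields = parse_pref_cookie(cookie)
```

Subtracting the TM date from today gives you the "days tracked" figure G-Zapper reports.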

Comments  Permalink

MSN Search is Giving Away 1 Million.. Just to Search!
2/15/2006 @ 9:02pm

Crazy times call for crazy ideas. Definitely creative, but will this one work? MSN started an advertising campaign on Monday (Feb 13) to promote its search engine. They will be giving away up to 1 million dollars in a contest just for using their search engine.

I'm not sure whether users have to visit the dedicated contest site msnsearchandwin.com or can just go to msn.com, but either way, this is an interesting marketing idea.

One problem up front though. How many times have you searched for something and come across a blinking image saying "You've just won!!"? Clicking the image just brings popups, spam, etc. Yeah, plenty of those out there. I'm afraid users may confuse MSN's legitimate contest message with similar web spam, unless the contest is strictly run from within the msnsearchandwin.com site. Although that would defeat the purpose of advertising their main search engine, msn.com. Isn't msn.com where they really want users to go?

Comments  Permalink

NETCheck Released and PADexpress Updated
2/17/2006 @ 1:54pm

Two new products specifically for software developers, only at ksoft:

NETCheck v1.0

Software developers using .NET need to be sure their users have the .NET framework installed or be faced with an ugly Windows error. NETCheck allows developers to verify .NET is installed and if not, prompt the user to download it. With .NET software on the rise, it's a must!
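As a sketch of the idea behind such a check, here is how the version test might look. I believe .NET setup records installed versions as subkeys under HKLM\SOFTWARE\Microsoft\NET Framework Setup\NDP, but verify that key on the framework versions you target; the logic below is kept registry-free so it works on any list of subkey names:

```python
def installed_net_versions(ndp_subkeys):
    """Given subkey names read from the NDP registry key (assumed location:
    HKLM\\SOFTWARE\\Microsoft\\NET Framework Setup\\NDP), return the
    framework versions present, e.g. 'v2.0.50727' -> '2.0.50727'."""
    return sorted(key[1:] for key in ndp_subkeys if key.startswith("v"))

def needs_download(ndp_subkeys, required="2.0"):
    """True if no installed version matches the required major.minor,
    i.e. the user should be prompted to download the framework."""
    return not any(v.startswith(required) for v in installed_net_versions(ndp_subkeys))
```

On Windows you would enumerate those subkeys with winreg.EnumKey before calling these functions.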

PADexpress v1.3

With just one click, software authors can submit their PAD files to more than 100 shareware and freeware sites. This software promotion tool crawls over 700 software directories, automatically submitting PAD files. Promoting your new software has never been easier.
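For the curious, PAD files are plain XML, so reading the basics out of one takes only a few lines. The element names below follow my reading of the PAD spec (an XML_DIZ_INFO root with a Program_Info section), so double-check them against the official DTD; the sample values are just an illustration:

```python
import xml.etree.ElementTree as ET

# A trimmed-down PAD file; real ones carry many more fields.
PAD_SAMPLE = """<XML_DIZ_INFO>
  <Program_Info>
    <Program_Name>G-Zapper</Program_Name>
    <Program_Version>1.1</Program_Version>
  </Program_Info>
</XML_DIZ_INFO>"""

def pad_summary(pad_xml):
    """Pull the program name and version out of a PAD file."""
    info = ET.fromstring(pad_xml).find("Program_Info")
    return info.findtext("Program_Name"), info.findtext("Program_Version")
```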

Comments  Permalink

New Breed of Blog-Spam Bots Roaming the Internet
3/12/2006 @ 3:38pm

I have recently noticed an increase in the number of spam robots hitting various sites, leaving messages eerily like a real person's.

The robots don't appear to be targeting any specific site, but are instead configured in a very generic way to process almost any type of web form, post a submission, and hope the web site includes a referrer link. Of course, the referrer link contains the real spam payload, providing a link to their web site and the hope of a higher Google ranking.

Here is an example of the message left by one of the robots:

Very interesting and beautiful site. It is a lot of helpful information. Thanks!

Try searching that exact phrase at Google to see how many web sites have been attacked:

Results: 23,900

The majority of the sites are blogs.

The problem with most blogs is that they leave referrer links, trackbacks, and user comments completely open to automation. Here is the general algorithm the bots are using:

1. Perform a search for a target keyword in a popular search engine.

2. For each resulting web site, perform the following:

3. Crawl entire site looking for a submission form. Any form will do.

4. Parse every field in the form and create a submission string.

5. Insert payload (spam web site to promote) in particular field. Perhaps, a username link field, or web site link field, if one can be found.

6. Insert canned user response in comment field to appear to be a genuine user comment.

7. Generate automatic username and email address.

8. Submit using protocol specified in form (GET or POST).

9. Go to step 2 to repeat.
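To appreciate how little the bots need, steps 3 and 4 amount to nothing more than standard HTML parsing. A minimal sketch of collecting every form on a page with its method and field names (using Python's built-in parser; the sample page is invented):

```python
from html.parser import HTMLParser

class FormScanner(HTMLParser):
    """Collect each form on a page along with its method and field names --
    the raw material for building a generic submission string."""
    def __init__(self):
        super().__init__()
        self.forms = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "form":
            self.forms.append({"action": a.get("action", ""),
                               "method": a.get("method", "get").lower(),
                               "fields": []})
        elif tag in ("input", "textarea", "select") and self.forms:
            if a.get("name"):
                self.forms[-1]["fields"].append(a["name"])

# A typical blog comment form (hypothetical example).
page = ('<form action="/comment" method="post"><input name="author">'
        '<input name="url"><textarea name="comment"></textarea></form>')
scanner = FormScanner()
scanner.feed(page)
```

Knowing how trivial this is should inform how you defend your own forms.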

Looking at the algorithm above, which I believe they are using, notice the danger of step 3. The spam I have found has been on forms which do not even provide a referrer link, or any URL link for that matter. For example, a comment field which allows no room for a link.

The bots try a submission on any and all submission forms they come across. With the availability of networked PCs and zombie PCs, the spammers have access to a growing network from which they can launch these attacks, even on innocent submission forms.

The potential result of this? At a minimum, wasted bandwidth for the target web sites. Worse, blog comments littered with seemingly real user comments. And even worse than that, junk data added to backend databases.

A solution to this problem lies not necessarily in stopping the attacks, but rather in discouraging them from taking place to begin with.

Think about the benefits for the spammer: a link on thousands of blogs, a higher Google rank (since Google's ranking algorithm relies on which sites link back to yours), easy access to blog comment screens with little or no security mechanism (captcha), and virtually no cost to perform the submissions.

With the growing abundance of blogs, it's no wonder people want to take advantage of this. But first, the major blog sites need to activate comment security across all of their blogs and turn it on by default. Search engine evolution may also play an important role in curbing this issue.
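One cheap deterrent along these lines, short of a full captcha, is a honeypot field: add an input that humans never see (hidden with CSS) and reject any submission that fills it in, since a generic bot fills every field it finds. A minimal server-side sketch, with a made-up field name:

```python
def is_bot_submission(form_data):
    """Reject submissions that filled in the honeypot field.
    'website2' is a hypothetical field name -- it is rendered in the form
    but hidden from humans via CSS, so only automation completes it."""
    return bool(form_data.get("website2", "").strip())
```

Paired with the generic-bot behavior described above, this catches a surprising share of automated posts at zero cost to real commenters.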

Comments  Permalink

Dumpster Diving for robots.txt Files
3/18/2006 @ 10:09pm

You might be surprised what one can find these days, hiding within obscure web files, such as the robots.txt file.

Just for a brief intro, the robots.txt file is used by webmasters to tell the search engines which pages on their site should be ignored. Like most plain-text configuration files, robots.txt can include comments.
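If you want to go dumpster diving yourself, pulling the comment lines out of a robots.txt takes only a few lines of Python (the sample text below is invented):

```python
def robots_comments(robots_txt):
    """Return everything after a '#' on each line of a robots.txt --
    the comments, which is where the interesting stuff hides."""
    comments = []
    for line in robots_txt.splitlines():
        if "#" in line:
            comments.append(line.split("#", 1)[1].strip())
    return comments

# Hypothetical robots.txt content for illustration.
sample = ("# Staff journal follows\n"
          "User-agent: *\n"
          "Disallow: /search  # keep results out")
```

Fetch any site's file with urllib from http://example.com/robots.txt and feed it through.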

The geek in me found it interesting to hit a few popular sites for their robots.txt file, just to see what's there. Check this out:


They block all of their search engine colleagues from indexing their own search results. I think that is a little ironic. Although their list of robots is somewhat dated.


An entire blog hidden within the robots.txt file? It's like looking at an ezine from the dial-up days. Even more amazing, it appears updated daily. There is even an advertisement banner! We're talking about a robots.txt file here.


A long list of URLs. Some are more interesting than others. At least you can tell what they consider important enough to keep out of the search engines. This one stuck out though: /microsoft

What could that be? Last time I checked, those two were strict corporate enemies. Curious to see, I navigated over to the link. I am somewhat confused by the resulting page and even more confused by the title bar: Microsoft - Google Search

Isn't that a copyright violation?

Comments  Permalink

How Important is it for your RSS Feed to Validate?
4/2/2006 @ 9:07am

A growing number of tools are available for creating RSS feeds from scratch. Since RSS is a very specific file format, it is important that your RSS tools follow the specification; otherwise some directories may not list your feed and, even worse, some users may not be able to view it.

While many of the RSS tools out there create perfectly compliant RSS feeds, others may follow old rules and create a feed which contains outdated fields or even errors. This is true for both RSS and Atom feeds.

You might have discovered the free RSS service FeedValidator, which lets you know if anything is wrong with your feed. But it can return some pretty confusing errors if anything is off.

Since RSS Submit includes a validation feature to help you check your feeds prior to submission, this is a key topic to elaborate on.

So the question arises, how important is it that your feed is 100% compliant?

Well, in all honesty, it really depends on whether you're a type A or type B personality. Us? Take a look.

Keep in mind that FeedValidator will check your feed against every single rule in the specification. If your feed passes, you can rest assured you are 100% compliant. However, not all RSS directories and feed readers need this. In fact, many will still display your feed just fine if it is only a matter of a few errors being returned.

Some example minor errors:

"item should contain a guid element"

Explanation: "It's recommended that you provide the guid, and if possible make it a permalink." However, this field is rather new and optional. Most, if not all, directories and readers will accept your feed without the guid field.

"width must be less than X in size"

Explanation: "The image width must be less than size X." This is another field which is usually ignored by most readers.

"This feed uses an obsolete namespace"

Explanation: "This feed is using an obsolete namespace used by an obsolete snapshot of Atom, which is not supported by this validator." The feed may be perfectly fine for the directories and readers; still, this will show as an error in the validator.

"Unexpected version attribute on feed element"

Explanation: "Your feed contains elements with attributes that are not defined in the relevant specifications." However, the directories and readers may simply ignore this field.

"Unexpected mode attribute on title element"

Explanation: "Your feed contains elements with attributes that are not defined in the relevant specifications." Again, this will likely be ignored by feed readers.

If your feed is showing some errors, but you insist it's a valid feed, you may want to read the full details on what FeedValidator considers an error versus a warning.

What if you don't want to bother fiddling in XML and correcting the errors by hand?

One solution would be to use a different tool to create your feed - a tool which hopefully follows the standards better. For example, FeedForAll may help.

Another solution would be to send your feed through the free service FeedBurner. Assuming your feed is at least readable, they will provide you with a proxy feed URL pointing to a valid, compliant RSS feed made from your original. They will also let you record traffic and add extras. You would then submit the feed URL they provide to you.

In summary, I would always recommend making sure your feed is 100% compliant by checking it in FeedValidator.org or by clicking the little "Validate" button in RSS Submit. This will ensure maximum acceptance when submitting to the directories. Besides, it gives you a good feeling seeing the glowing message "This is a valid RSS feed!".
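If you just want a quick sanity check before running the full validator, the RSS 2.0 spec requires only three elements inside the channel: title, link, and description. A minimal sketch (the sample feed is invented):

```python
import xml.etree.ElementTree as ET

def rss_sanity_check(feed_xml):
    """Return the names of required RSS 2.0 channel elements that are
    missing: title, link, description. An empty list means the basics
    are all there (full compliance still needs a real validator)."""
    channel = ET.fromstring(feed_xml).find("channel")
    if channel is None:
        return ["channel"]
    return [tag for tag in ("title", "link", "description")
            if channel.find(tag) is None]

# Hypothetical feed missing its description element.
feed = """<rss version="2.0"><channel>
  <title>ksoft Blog</title>
  <link>http://www.ksoftware.net</link>
</channel></rss>"""
```

This catches the show-stoppers; FeedValidator still checks everything else.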

Comments  Permalink

What's a Memedigger?
4/3/2006 @ 7:24pm

Ok, I know we're all getting tired of hearing "web 2.0". However, some cool things have come out of it, and one of them, although still very young, is the memedigger.

Memediggers are sites which rank content based on user votes. They usually involve news stories, but can also include web sites, RSS feeds, or anything else.

The basic idea is that users submit news stories. They appear in a list with a very brief description. Other users get to vote on them, and the vote count bumps the topic up or down in the ranking. Stories that get bumped enough make it to the front page. It doesn't matter if the original news story was spammy, since it will quickly get buried. Essentially, memediggers help filter interesting content out from the rest of the junk.
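At its core, that ranking is nothing fancy. A toy sketch with invented stories and vote counts (real sites layer on time decay, duplicate detection, and anti-gaming measures):

```python
def front_page(stories, size=3):
    """Rank submissions by net votes (ups minus downs) and return the
    titles that make the front page."""
    ranked = sorted(stories, key=lambda s: s["ups"] - s["downs"], reverse=True)
    return [s["title"] for s in ranked[:size]]

# Hypothetical submissions.
stories = [
    {"title": "A", "ups": 12, "downs": 2},
    {"title": "B", "ups": 40, "downs": 5},
    {"title": "C", "ups": 3, "downs": 9},
    {"title": "D", "ups": 20, "downs": 1},
]
```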

But how does this help you?

Well, if you have not noticed, the web is changing! Submitting your web site to the search engines is hardly enough these days. But there is a good deal of traffic to be had from making an appearance on one of these memedigger sites! Even if you're only listed on the "New Release" page for just a few hours, it's often enough to bring you a nice bump in traffic. If you're lucky, it'll stick.

Another note: it's relatively painless to submit your site to these services. Most have a simple sign-up form with an email validation. Don't worry about using a real email; with all the web 2.0 disposable email services out there, you can take your pick. Actually, this is an interesting topic of its own. I may do a blog post about it next.

So where should you head to submit your stories? Here is my favorite list:

Digg - the biggest memedigger around. You'll get a big bump in traffic just from the first article posted, but will very likely be buried within a few hours.

Reddit - also popular, but with fewer visitors. This site will keep your submission around a little longer since fewer people submit to it.

Shoutwire - Don't get too confused with their layout. The signup button is on the right-side of the screen, but on my monitor, it's almost hidden.

Boxxet - I really have no idea what this site is aiming for just yet as they are in beta. Still, it couldn't hurt to submit.

Newsbump - Just like reddit, this is another less popular memedigger, but will keep your article sticking around longer.

FeedButler - Sort of a memedigger that works with RSS feeds, although it looks more like an RSS directory. Submitting is easy enough.

NowPublic - This is another memedigger site where you can submit your article or news. They have a simple signup process with the usual email verification. They have a method to share news you have written and news that you are reading. The latter option is the one you probably want to submit to.

TailRank - While they do not require email verification to sign up, which does make submitting easier, I find their homepage somewhat confusing. TailRank says they find the hottest posts from thousands of blogs via incoming links and the text of the post. This sounds more like the method Google uses for PageRank, but nonetheless, it is worth submitting to.

So give it a try. If you have something interesting to tell people about, you'll go far with memediggers. What a weird name though, huh?

Comments  Permalink

Power Up Your Site With RSS Auto-Discovery
4/11/2006 @ 7:08pm

You may have found this blog from a link on another web site, an email signature, or just word-of-mouth. Or was it the RSS auto-discovery tag?

Before we dive into this, let me explain exactly what an auto-discovery tag is.

It's a simple line of code you insert into the top of every web page across your site which indicates that your site has a feed. If the user is running RSS-compatible software (which looks for this tag), they will get a little notification mentioning that they can subscribe to your feed. It's like a big flag waving around saying "Hey! I use RSS! Subscribe and hear what I have to say tomorrow".

What a great way to grow your web site traffic! But it gets even better.

I'm amazed at how many web sites out there publish RSS feeds and neglect to include an auto-discovery tag. Without this tag, most users will never even know that your site has a feed.

But wait a sec. Internet Explorer 6.0 ignores this tag. So do a lot of other web browsers, and you may already include a little orange XML icon somewhere on your pages linking to your feed. Why should you care then? Two words: Windows Vista

Internet Explorer 7, which is due out at the end of 2006 for businesses and early 2007 for the rest of us, will come packed with major - let me capitalize that one - Major support for RSS feeds. It will literally be integrated directly into the operating system. More than that, it will also have an RSS auto-discovery feature, just like you see in Mozilla Firefox with the proper extension installed.

I'm sure you're convinced about the importance of having an RSS auto-discovery tag. So let me show you what you need to paste into your web pages.

Paste the following link into each page on your site where you wish for users to be notified that you have a feed. Of course, replace http://feeds.feedburner.com/ksoft with the URL to your own feed.

<link rel="alternate" type="application/rss+xml" title="RSS" href="http://feeds.feedburner.com/ksoft" />

If you want to see a real-world example, go ahead and click View-Source in your web browser to catch our tag.
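For the curious, here is roughly what a feed-aware browser or reader does when it scans your page for that tag. A minimal sketch using Python's built-in HTML parser:

```python
from html.parser import HTMLParser

class FeedFinder(HTMLParser):
    """Collect RSS/Atom auto-discovery links: <link rel="alternate">
    elements whose type is a known feed MIME type."""
    FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if (tag == "link" and a.get("rel") == "alternate"
                and a.get("type") in self.FEED_TYPES):
            self.feeds.append(a.get("href"))

# The tag from this post, wrapped in a minimal page.
page = ('<html><head><link rel="alternate" type="application/rss+xml" '
        'title="RSS" href="http://feeds.feedburner.com/ksoft" /></head></html>')
finder = FeedFinder()
finder.feed(page)
```

If the scan turns up an href, the software shows its "subscribe" notification; no tag, no notification.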

Pretty simple, but powerful stuff.

Comments  Permalink
