TinyMCE, also known as the Tiny Moxiecode Content Editor, is a platform-independent web-based JavaScript/HTML WYSIWYG editor control, released as open source software under the LGPL by Moxiecode Systems AB. It can convert HTML textarea fields or other HTML elements into editor instances. TinyMCE is designed to integrate easily with content management systems.
TinyMCE integrates with many different open source systems, such as Mambo, Joomla!, Drupal, Plone, WordPress, b2evolution, e107, phpWebSite, Umbraco and Freedomeditor.
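As a rough illustration of how the textarea conversion works, here is a minimal sketch of a TinyMCE 3.x setup. The script path tiny_mce/tiny_mce.js and the "simple" theme are just the defaults from a typical download, so adjust them to your own installation:

<script type="text/javascript" src="tiny_mce/tiny_mce.js"></script>
<script type="text/javascript">
  // Turn every <textarea> on the page into a WYSIWYG editor instance
  tinyMCE.init({
    mode: "textareas",
    theme: "simple"
  });
</script>
<textarea name="content" rows="10" cols="60"></textarea>

A CMS integration typically bundles an initialization call like this alongside the textareas it already renders.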
Wednesday, July 28, 2010
Thursday, May 6, 2010
The Web Robots Pages
Web Robots (also known as Web Wanderers, Crawlers, or Spiders) are programs that traverse the Web automatically. Search engines such as Google use them to index web content, spammers use them to scan for email addresses, and they have many other uses.
Web site owners use the /robots.txt file to give instructions about their site to web robots; this is called The Robots Exclusion Protocol.
It works like this: a robot wants to visit a Web site URL, say http://www.example.com/welcome.html. Before it does so, it first checks for http://www.example.com/robots.txt, and finds:
User-agent: *
Disallow: /
The "User-agent: *" means this section applies to all robots. The "Disallow: /" tells the robot that it should not visit any pages on the site.
There are two important considerations when using /robots.txt:
* Robots can ignore your /robots.txt. In particular, malware robots that scan the web for security vulnerabilities and email address harvesters used by spammers will pay no attention to it.
* The /robots.txt file is publicly available. Anyone can see what sections of your server you don't want robots to use.
You can learn more about web robots at http://www.robotstxt.org/.
Saturday, April 10, 2010
Greasemonkey Add-ons for Firefox
Greasemonkey is a Firefox extension that allows you to write scripts that alter the web pages you visit. You can use it to make a web site more readable or more usable. You can fix rendering bugs that the site owner can't be bothered to fix themselves. You can alter pages so they work better with assistive technologies that speak a web page out loud or convert it to Braille. You can even automatically retrieve data from other sites to make two sites more interconnected.
Step 1. Download the extension: go to url.
Step 2. To test that Greasemonkey is working, download a user script: go to url.
Step 3. Open youtube.com and look for a "download" link. Now you can download videos from the YouTube site.
OR
How to start: go to url
OR
How to write Greasemonkey scripts: go to url
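To give a feel for what such a script looks like, here is a minimal sketch of a Greasemonkey user script. The script name, namespace, and @include pattern are placeholders, and the page change it makes is purely illustrative:

// ==UserScript==
// @name          Hello Example
// @namespace     http://example.com/scripts
// @description   Illustrative only: appends a short note to matching pages
// @include       http://www.example.com/*
// ==/UserScript==

// The script runs after the page loads and can use the normal DOM APIs.
var note = document.createElement('p');
note.textContent = 'Hello from Greasemonkey!';
document.body.appendChild(note);

Save it with a .user.js extension and open it in Firefox, and Greasemonkey will offer to install it.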