sitescooper - download news from web sites and convert it automatically into one of several formats suitable for viewing on a Palm handheld.
sitescooper [options] [ [-site sitename] ...]
sitescooper [options] [-sites sitename ...]
sitescooper [options] [-name nm] [-levels n] [-storyurl regexp]
    [-set sitefileparam value] url [...]
Options: [-debug] [-refresh] [-fullrefresh] [-config file] [-install dir] [-instapp app] [-dump] [-dumpprc] [-nowrite] [-nodates] [-quiet] [-admin cmd] [-nolinkrewrite] [-stdout-to file] [-badcache] [-keep-tmps] [-fromcache] [-noheaders] [-nofooters] [-outputtemplate file.tmpl] [-grep] [-profile file.nhp] [-profiles file.nhp file2.nhp ...] [-filename template] [-prctitle template] [-parallel] [-disc] [-limit numkbytes] [-maxlinks numlinks] [-maxstories numstories] [-timeout numsecs]
[-text | -html | -mhtml | -doc | -plucker | -mplucker | -isilo | -misilo | -misilox | -richreader | -pipe fmt command] [-bw | -color] [-maxcolors n] [-cvtargs args_for_converter]
This script, in conjunction with its configuration file and its set of site files, will download news stories from several top news sites into text format and/or onto your Palm handheld (with the aid of the makedoc/MakeDocW or iSilo utilities).
Alternatively, URLs can be supplied on the command line, in which case those URLs will be downloaded and converted using a reasonable set of default settings. Both HTTP and local files (using the file:/// protocol) are supported.
Multiple types of sites are supported:
1-level sites, where the text to be converted is all present on one page (such as Slashdot, Linux Weekly News, BluesNews, NTKnow, Ars Technica);
2-level sites, where the text to be converted is linked to from a Table of Contents page (such as Wired News, BBC News, and I, Cringely);
3-level sites, where the text to be converted is linked to from a Table of Contents page, which in turn is linked to from a list-of-issues page (such as PalmPower).
In addition, sites that post news as items on one big page, such as Slashdot, Ars Technica, and BluesNews, are supported using diff.
Note that the URLs-on-the-command-line invocation format does not currently support 2- or 3-level sites.
The script is portable to most UNIX variants that support perl, as well as the Win32 platform (tested with ActivePerl 5.00502 build 509).
sitescooper maintains a cache in its temporary directory; files are kept in this cache for a week at most. Ditto for the text output directory (set with TextSaveDir in the built-in configuration).
If a password is required for the site, and the current sitescooper session is interactive, the user will be prompted for the username and password. This authentication token will be saved for later use. This way a site that requires login can be set up as a .site -- just log in once, and your password is saved for future non-interactive runs.
Note, however, that the encryption used to hide the password in the sitescooper configuration is pretty transparent; rather than using your own username and password to log in to password-protected sites, I recommend using a dedicated sitescooper account instead.
The -sites and -profiles options are shorthand for repeating the corresponding single-argument option; for example,

    -site ntk.site -site tbtf.site

can be written as

    -sites ntk.site tbtf.site

and, similarly,

    -profile ntk.site -profile tbtf.site

can be written as

    -profiles ntk.site tbtf.site
-doc
    Page(s) are downloaded into DOC format, with all the articles listed in full, one after the other.
-text
    Page(s) are downloaded into plain text format, with all the articles listed in full, one after the other.
-html
    Page(s) are downloaded into HTML format, on one big page, with a table of contents (taken from the site if possible), followed by all the articles one after another.
-mhtml
    Page(s) are downloaded into HTML format, but retaining the multiple-page format. This will create the output in a directory called site_name; in conjunction with the -dump argument, it will output the path of this directory on standard output before exiting.
-plucker
    Page(s) are downloaded into Plucker format (see http://plucker.gnu-designs.com/ ), on one big page. The page(s) will be displayed with a table of contents (taken from the site if possible), followed by all the articles one after another.
-isilo
    Page(s) are downloaded into iSilo format (see http://www.isilo.com/ ), on one big page. This is the default. The page(s) will be displayed with a table of contents (taken from the site if possible), followed by all the articles one after another.
-misilo
    Page(s) are downloaded into iSilo format (see http://www.isilo.com/ ), with one iSilo document per site and each story on a separate page. The iSilo document will have a table-of-contents page, taken from the site if possible, with each article on a separate page.
-misilox
    Page(s) are downloaded into iSilo format (see http://www.isilo.com/ ), with one iSilo document per site and each story on a separate page. This uses the iSiloXC converter. The iSilo document will have a table-of-contents page, taken from the site if possible, with each article on a separate page.
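    For example, to scoop a site into a multi-page iSilo document using one of the site files shipped with the distribution, a command along these lines should work:

        sitescooper.pl -misilo -site site_samples/tech/ntk.site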
-richreader
    Page(s) are downloaded into RichReader format using HTML2Doc.exe (see http://users.erols.com/arenakm/palm/RichReader.html ). The page(s) will be displayed with a table of contents (taken from the site if possible), followed by all the articles one after another.
-pipe fmt command
    Page(s) are downloaded into an arbitrary format, using the command provided. Sitescooper will still rewrite the page(s) according to the fmt argument, which should be one of text, html or mhtml.
    The command argument can contain __SCOOPFILE__, which will be replaced with the filename of the file containing the rewritten pages in the above format; __SYNCFILE__, which will be replaced with a suitable filename in the Palm synchronization folder; and __TITLE__, which will be replaced by the title of the file (generally a string containing the date and site name).

    Note that for the -mhtml switch, __SCOOPFILE__ will be replaced with the name of the file containing the table-of-contents page. It is up to the conversion utility to follow the href links to the other files in that directory.
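    As an illustration, assuming a makedoc binary that takes an input file, an output file and a title as its arguments (that argument order is an assumption about makedoc, not something sitescooper defines), a -pipe invocation might look like:

        sitescooper.pl -pipe text 'makedoc __SCOOPFILE__ __SYNCFILE__ "__TITLE__"'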
-dump
    Page(s) are dumped directly to stdout in text or HTML format, instead of being written to files and converted individually. This option no longer implies -text as it used to; to dump text, use -dump -text.
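    For example, to print a site as plain text on standard output and redirect it to a file:

        sitescooper.pl -dump -text http://www.ntk.net/ > ntk.txt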
-dumpprc
    Page(s) are dumped directly to stdout in converted format, as a PDB file (note: not PRC format!) suitable for installation to a Palm handheld.
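    For example, to capture the converted output as a PDB file (here assuming the default iSilo conversion):

        sitescooper.pl -dumpprc http://www.ntk.net/ > ntk.pdb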
Output filenames follow the filename template (see below); for example, a site file called foobar.site, with the URL http://www.foobar.com/ and the site name Foo Bar, scooped on January 1st 1999, would produce an output file named 1999_01_01_Foo_Bar.
To import cookies from a Netscape cookie file, use the import-cookies admin command:

    sitescooper.pl -admin import-cookies ~/.netscape/cookies

and on Windows:

    perl sitescooper.pl -admin import-cookies "C:\Program Files\Netscape\Users\Default\cookies.txt"
Unfortunately, MS Internet Explorer cookies are currently unsupported. If you wish to write a patch to support them, that'd be great.
The default filename template is YYYY_MM_DD_Site.
The default PDB title template is YYYY-Mon-DD: Site.
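For example, the defaults above could be given explicitly on the command line (assuming the template tokens are written literally as the option arguments):

    sitescooper.pl -filename YYYY_MM_DD_Site -prctitle 'YYYY-Mon-DD: Site' -site site_samples/tech/ntk.site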
You can control exactly what HTML or text is written to the output file using the -outputtemplate argument. This argument takes the name of a file, which is read and parsed to provide replacement templates for sitescooper.
The file is read as an HTML- or XML-style tagged format; for example, the template for the main page in HTML format is read from between the <htmlmainpage> and </htmlmainpage> tags. The templates that can be defined are as follows:
A sample template file is provided in the file default_templates.html; this may have been installed in the sitescooper install directory, /usr/share/sitescooper, or /usr/local/share/sitescooper. Note that the actual templates used are not loaded from this file; instead they are built into the sitescooper script itself, so changing this file will have no effect.
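For instance, a template file passed to -outputtemplate might define the HTML main page along these lines; the markup between the tags is purely illustrative, and the real substitution markers should be copied from default_templates.html:

    <htmlmainpage>
    <html>
    <head><title>...</title></head>
    <body>
    ...
    </body>
    </html>
    </htmlmainpage>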
To install, edit the script and change the #! line. You may also need to (a) change the Pilot install directory if you plan to use the Pilot installation functionality, and (b) edit the other parameters marked with CUSTOMISE in case they need to be customised for your site. They should be set to acceptable defaults (unless I forgot to comment out the proxy server lines I use ;).
sitescooper.pl http://www.ntk.net/
To snarf the ever-cutting NTKnow newsletter.
sitescooper.pl -refresh -html http://www.ntk.net/
To snarf NTKnow, ignoring any previously-read text, and producing HTML output.
sitescooper.pl -refresh -html -site site_samples/tech/ntk.site
To snarf NTKnow using the site file provided with the main distribution, producing HTML output.
sitescooper makes use of the $http_proxy environment variable, if it is set.
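For example, in a Bourne-style shell (the proxy host and port here are placeholders):

    http_proxy=http://proxy.example.com:8080/
    export http_proxy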
Justin Mason <jm /at/ jmason.org>
Copyright (C) 1999-2000 Justin Mason
This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA, or read it on the web at http://www.gnu.org/copyleft/gpl.html .
The CPAN script category for this script is Web. See http://www.cpan.org/scripts/ .
File::Find
File::Copy
File::Path
FindBin
Carp
Cwd
URI::URL
LWP::UserAgent
HTTP::Request::Common
HTTP::Date
HTML::Entities
All of these can be picked up from CPAN at http://www.cpan.org/ . Note that HTML::Entities is actually included in one of the previous packages, so you do not need to install it separately.
Win32::TieRegistry will be used, if running on a Win32 platform, to find the Pilot Desktop software's installation directory. Algorithm::Diff is used to support diffing sites without running an external diff application (this is required on Mac systems).
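If any of these modules are missing, they can usually be installed with the CPAN shell; for example:

    perl -MCPAN -e 'install URI::URL'
    perl -MCPAN -e 'install Algorithm::Diff'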
Sitescooper downloads news stories from the web and converts them to Palm handheld iSilo, DOC or text format for later reading on-the-move. Site files and full documentation can be found at http://sitescooper.org/ .