The NamedNodeMap Interface

If for some reason you want all the attributes of an element, or you don't know their names, you can use the getAttributes() method, inherited from Node, to retrieve a NamedNodeMap. [3] The NamedNodeMap interface, summarized in Example 11.5, has methods to get and set the various named nodes as well as to iterate through the nodes as a list. Here it's used for attributes, but soon you'll see it used for notations and entities as well.

[3] Why getAttributes() is in Node instead of Element I have no idea. Elements are the only kind of node that can have attributes. For all other types of node, getAttributes() returns null.

Example 11.5 The NamedNodeMap Interface
package org.w3c.dom;

public interface NamedNodeMap {

  // for iterating through the map as a list
  public Node item(int index);
  public int  getLength();

  // For working with particular items in the list
  public Node getNamedItem(String name);
  public Node setNamedItem(Node arg) throws DOMException;
  public Node removeNamedItem(String name)
   throws DOMException;
  public Node getNamedItemNS(String namespaceURI,
   String localName);
  public Node setNamedItemNS(Node arg) throws DOMException;
  public Node removeNamedItemNS(String namespaceURI,
   String localName) throws DOMException;

}
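To make the iteration methods concrete, here's a small sketch (the class name, helper method, and sample document are my own, not from the book) that parses a document and walks an element's attributes through the NamedNodeMap returned by getAttributes(), using item() and getLength():

```java
import java.io.ByteArrayInputStream;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.*;

public class AttributeLister {

  // Parse a document held in a string and collect "name=value"
  // pairs for every attribute of the root element by iterating
  // its NamedNodeMap as a list
  public static List<String> listAttributes(String xml)
      throws Exception {
    DocumentBuilderFactory factory
     = DocumentBuilderFactory.newInstance();
    DocumentBuilder parser = factory.newDocumentBuilder();
    Document document
     = parser.parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
    Element root = document.getDocumentElement();

    NamedNodeMap attributes = root.getAttributes();
    List<String> result = new ArrayList<String>();
    for (int i = 0; i < attributes.getLength(); i++) {
      Node attribute = attributes.item(i);
      result.add(attribute.getNodeName() + "="
       + attribute.getNodeValue());
    }
    return result;
  }

  public static void main(String[] args) throws Exception {
    String xml
     = "<book edition=\"1\" year=\"2002\">Processing XML with Java</book>";
    for (String attribute : listAttributes(xml)) {
      System.out.println(attribute);
    }
  }
}
```

Note that each item() call returns a Node; for attributes these are Attr objects, so getNodeName() and getNodeValue() return the attribute's name and value.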

I'll demonstrate with an XLink spider program like the one in Chapter 6, this time implementing the program on top of DOM rather than SAX. You can judge for yourself which one is more natural.

Recall that XLink is an attribute-based syntax for denoting connections between documents. The element that is the link has an xlink:type attribute with the value simple, and an xlink:href attribute whose value is the URL of the remote document. For example, the following book element points to this book's home page:

 <book xlink:type="simple"
       xlink:href="http://www.cafeconleche.org/books/xmljava/"
       xmlns:xlink="http://www.w3.org/1999/xlink">
   Processing XML with Java
 </book>

The customary prefix xlink is bound to the namespace URI http://www.w3.org/1999/xlink. It's usually advisable to depend on the URI and not the prefix, which may change.
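Depending on the URI rather than the prefix means using getAttributeNS() with the XLink namespace URI instead of getAttribute("xlink:href"). Here's a small sketch of my own (the class name and sample documents are hypothetical) showing that the same lookup finds the link no matter which prefix the document binds to the XLink namespace:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class PrefixIndependence {

  public static final String XLINK_NAMESPACE
   = "http://www.w3.org/1999/xlink";

  // Return the XLink href of the document element, whatever
  // prefix the document happens to use for the XLink namespace
  public static String href(String xml) throws Exception {
    DocumentBuilderFactory factory
     = DocumentBuilderFactory.newInstance();
    // Namespace awareness is essential for getAttributeNS()
    factory.setNamespaceAware(true);
    DocumentBuilder parser = factory.newDocumentBuilder();
    Document document
     = parser.parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
    Element root = document.getDocumentElement();
    return root.getAttributeNS(XLINK_NAMESPACE, "href");
  }

  public static void main(String[] args) throws Exception {
    // Same link, two different prefixes; both are found
    System.out.println(href(
     "<book xlink:href=\"http://www.example.com/\""
     + " xmlns:xlink=\"http://www.w3.org/1999/xlink\"/>"));
    System.out.println(href(
     "<book xl:href=\"http://www.example.com/\""
     + " xmlns:xl=\"http://www.w3.org/1999/xlink\"/>"));
  }
}
```

If the factory were left namespace-unaware, getAttributeNS() would not match either attribute, which is why the spider in Example 11.6 calls setNamespaceAware(true).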

Relative URLs are relative to the nearest ancestor's xml:base attribute if one is present, or to the location of the document otherwise. For example, the book element in this library element also points to http://www.cafeconleche.org/books/xmljava/.

 <library xml:base="http://www.cafeconleche.org/"
          xmlns:xlink="http://www.w3.org/1999/xlink">
   <book xlink:type="simple" xlink:href="books/xmljava/">
     Processing XML with Java
   </book>
 </library>

The prefix xml is bound to the namespace URI http://www.w3.org/XML/1998/namespace. This is a special case, however. The xml prefix cannot be changed, and it does not need to be declared.
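The resolution of a relative href against a base URL can be done with the two-argument java.net.URL constructor, which is also how the spider in Example 11.6 handles it. Here's a minimal sketch (the class name and the example.com base are my own, for illustration):

```java
import java.net.MalformedURLException;
import java.net.URL;

public class BaseResolver {

  // Resolve a possibly relative URI against a base URI;
  // the two-argument URL constructor does the RFC-style
  // relative reference resolution for us
  public static String resolve(String base, String relative)
      throws MalformedURLException {
    URL contextURL = new URL(base);
    return new URL(contextURL, relative).toExternalForm();
  }

  public static void main(String[] args)
      throws MalformedURLException {
    // Mirrors the library example: an xml:base plus a
    // relative xlink:href (hypothetical base URL)
    System.out.println(
     resolve("http://www.example.com/", "books/xmljava/"));
    // Relative references also resolve against a document URL
    System.out.println(
     resolve("http://www.example.com/a/b.xml", "c.xml"));
  }
}
```

If the second argument is already an absolute URL, the constructor simply ignores the context, so the same call handles both absolute and relative hrefs.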

Attributes provide all of the information needed to process the link. Consequently, a spider can follow XLinks without knowing any details about the rest of the markup in the document. Example 11.6 is such a program. Currently this spider does nothing more than follow the links and print their URLs. It would not be hard to add code to load the discovered documents into a database or perform some other useful operation, however. You would simply subclass DOMSpider while overriding the process() method.
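A subclass along those lines might look like the following sketch. The stand-in base class here exists only so the example compiles on its own; in the real program you would extend the DOMSpider class of Example 11.6 directly, and the subclass name and counting behavior are my own invention:

```java
import org.w3c.dom.Document;

// Minimal stand-in for Example 11.6's DOMSpider so this
// sketch is self-contained; only process() matters here
class DOMSpider {
  public void process(Document document, String uri) {
    System.out.println(uri);
  }
}

// A subclass that counts the documents it sees in addition to
// printing their URLs; a real override might instead store each
// document in a database
public class CountingSpider extends DOMSpider {

  private int documentsSeen = 0;

  public void process(Document document, String uri) {
    documentsSeen++;
    super.process(document, uri);
  }

  public int getDocumentsSeen() {
    return documentsSeen;
  }

  public static void main(String[] args) {
    CountingSpider spider = new CountingSpider();
    spider.process(null, "http://www.example.com/a.xml");
    spider.process(null, "http://www.example.com/b.xml");
    System.out.println(
     spider.getDocumentsSeen() + " documents processed");
  }
}
```

Because spider() calls process() through the usual virtual dispatch, no other part of the program needs to change.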

Example 11.6 An XLink Spider That Uses DOM
import org.xml.sax.SAXException;
import javax.xml.parsers.*;
import java.io.*;
import java.util.*;
import java.net.*;
import org.w3c.dom.*;

public class DOMSpider {

  public static String XLINK_NAMESPACE
   = "http://www.w3.org/1999/xlink";

  // This will be used to read all the documents. We could use
  // multiple parsers in parallel. However, it's a lot easier
  // to work in a single thread, and doing so puts some real
  // limits on how much bandwidth this program will eat.
  private DocumentBuilder parser;

  // Builds the parser
  public DOMSpider() throws ParserConfigurationException {
    try {
      DocumentBuilderFactory factory
       = DocumentBuilderFactory.newInstance();
      factory.setNamespaceAware(true);
      parser = factory.newDocumentBuilder();
    }
    catch (FactoryConfigurationError e) {
      // I don't absolutely need to catch this, but I hate to
      // throw an Error for no good reason.
      throw new ParserConfigurationException(
       "Could not locate a factory class");
    }
  }

  // store the URLs already visited
  private Vector visited = new Vector();

  // Limit the amount of bandwidth this program uses
  private int maxDepth = 5;
  private int currentDepth = 0;

  public void spider(String systemID) {
    currentDepth++;
    try {
      if (currentDepth < maxDepth) {
        Document document = parser.parse(systemID);
        process(document, systemID);

        Vector toBeVisited = new Vector();
        // search the document for URIs,
        // store them in vector, and print them
        findLinks(document.getDocumentElement(),
         toBeVisited, systemID);

        Enumeration e = toBeVisited.elements();
        while (e.hasMoreElements()) {
          String uri = (String) e.nextElement();
          visited.add(uri);
          spider(uri);
        }
      }
    }
    catch (SAXException e) {
      // Couldn't load the document,
      // probably not well-formed XML, skip it
    }
    catch (IOException e) {
      // Couldn't load the document,
      // likely network failure, skip it
    }
    finally {
      currentDepth--;
      System.out.flush();
    }
  }

  public void process(Document document, String uri) {
    System.out.println(uri);
  }

  // Recursively descend the tree of one document
  private void findLinks(Element element, List uris,
   String base) {

    // Check for an xml:base attribute
    String baseAtt = element.getAttribute("xml:base");
    if (!baseAtt.equals("")) base = baseAtt;

    // look for XLinks in this element
    if (isSimpleLink(element)) {
      String uri
       = element.getAttributeNS(XLINK_NAMESPACE, "href");
      if (!uri.equals("")) {
        try {
          String wholePage = absolutize(base, uri);
          if (!visited.contains(wholePage)
           && !uris.contains(wholePage)) {
            uris.add(wholePage);
          }
        }
        catch (MalformedURLException e) {
          // If it's not a good URL, then we can't spider it
          // anyway, so just drop it on the floor.
        }
      } // end if
    } // end if

    // process child elements recursively
    NodeList children = element.getChildNodes();
    for (int i = 0; i < children.getLength(); i++) {
      Node node = children.item(i);
      int type = node.getNodeType();
      if (type == Node.ELEMENT_NODE) {
        findLinks((Element) node, uris, base);
      }
    } // end for
  }

  // If you're willing to require Java 1.4, you can do better
  // than this with the new java.net.URI class
  private static String absolutize(String context, String uri)
   throws MalformedURLException {

    URL contextURL = new URL(context);
    URL url = new URL(contextURL, uri);

    // Remove fragment identifier if any
    String wholePage = url.toExternalForm();
    int fragmentSeparator = wholePage.indexOf('#');
    if (fragmentSeparator != -1) {
      // There is a fragment identifier
      wholePage = wholePage.substring(0, fragmentSeparator);
    }
    return wholePage;
  }

  private static boolean isSimpleLink(Element element) {
    String type
     = element.getAttributeNS(XLINK_NAMESPACE, "type");
    if (type.equals("simple")) return true;
    return false;
  }

  public static void main(String[] args) {

    if (args.length == 0) {
      System.out.println("Usage: java DOMSpider topURL");
      return;
    }

    // start parsing...
    try {
      DOMSpider spider = new DOMSpider();
      spider.spider(args[0]);
    }
    catch (Exception e) {
      System.err.println(e);
      e.printStackTrace();
    }
  } // end main

} // end DOMSpider

There are two levels of recursion here. The spider() method recursively spiders documents. The findLinks() method recursively searches through the elements in a document looking for XLinks, adding the URLs found in these links to a list of unvisited pages. As each document is finished, the next document is retrieved from the list and processed in turn. If it's an XML document, then it is parsed and passed to the process() method. Non-XML documents found at the end of XLinks are ignored.

I tested this program by pointing it at the Resource Directory Description Language (RDDL) specification, which is one of the few real-world documents I know of that uses XLinks. I was surprised to find out just how much XLinked XML there is out there, although as yet most of it is simply more XML specifications. This must be what the Web felt like circa 1991. Here's a sample of the more interesting output:

 D:\books\XMLJAVA>java DOMSpider ...
 ...
 VeniceBridgeContent40.xsd

Processing XML with Java™: A Guide to SAX, DOM, JDOM, JAXP, and TrAX
ISBN: 0201771861
Year: 2001