12.5. Features of Module Import

12.5.1. Module "Executed" When Loaded

One effect of loading a module is that the imported module is "executed," that is, the top-level portion of the imported module is executed directly. This usually includes setting up global variables as well as executing the class and function declarations. If there is a check of __name__ to do more on direct script invocation, that code is executed, too.

Of course, this type of execution may or may not be the desired effect. If not, you will have to put as much code as possible into functions. Suffice it to say that good module programming style dictates that only function and/or class definitions should be at the top level of a module.
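
Here is a minimal sketch of that behavior (the module name onimport.py and its contents are hypothetical): the top-level statements run when the module is first imported, while the __name__ check fires only on direct script invocation.

###############
# onimport.py #
###############

print 'initializing onimport'        # top-level code: runs on (first) import

def util():                          # the def statement executes; the body does not
    return 42

if __name__ == '__main__':
    # reached only on direct invocation, e.g., $ python onimport.py
    print 'running as a script, util() =', util()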

For more information see Section 14.1.1 and the Core Note contained therein.

A new feature was added to Python that allows you to execute an installed module as a script. (Sure, running your own script is easy [$ foo.py], but executing a module in the standard library or in a third-party package is trickier.) You can read more about how to do this in Section 14.4.3.

12.5.2. Importing versus Loading

A module is loaded only once, regardless of the number of times it is imported. This prevents the module "execution" from happening over and over again if multiple imports occur. If your module imports the sys module, and so do five of the other modules you import, it would not be wise to load sys (or any other module) each time! So rest assured, loading happens only once, on first import.
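
Continuing with the hypothetical onimport.py module from above, an interactive session makes the load-once behavior visible; only an explicit reload() forces the top-level code to run again.

>>> import onimport                # first import: top-level code executes
initializing onimport
>>> import onimport                # already loaded: nothing runs again
>>> x = reload(onimport)           # explicit reload re-executes the module
initializing onimport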

12.5.3. Names Imported into Current Namespace

Calling from-import brings the name into the current namespace, meaning that you do not use the attribute/dotted notation to access the module identifier. For example, to access a variable named var in module module that was imported with:

from module import var


we would use "var" by itself. There is no need to reference the module since you imported var into your namespace. It is also possible to import all the names from the module into the current namespace using the following from-import statement:

from module import *


Core Style: Restrict your use of "from module import *"

In practice, using from module import * is considered poor style because it "pollutes" the current namespace and has the potential of overriding names in the current namespace; however, it is extremely convenient if a module has many variables that are often accessed, or if the module has a very long name.

We recommend using this form in only two situations. The first is where the target module has many attributes that would make it inconvenient to type in the module name over and over again. Two prime examples of this are the Tkinter (Python/Tk) and NumPy (Numeric Python) modules, and perhaps the socket module. The other place where it is acceptable to use from module import * is within the interactive interpreter, to save on the amount of typing.


12.5.4. Names Imported into Importer's Scope

Another side effect of importing just names from modules is that those names become part of the local namespace, with the possibility of hiding or overriding an existing object or built-in that has the same name. Also, changes to the variable affect only the local copy and not the original in the imported module's namespace. In other words, the binding is now local rather than across namespaces.

Here we present the code to two modules: an importer, impter.py, and an importee, imptee.py. Currently, impter.py uses the from-import statement, which creates only local bindings.

#############
# imptee.py #
#############

foo = 'abc'

def show():
    print 'foo from imptee:', foo


#############
# impter.py #
#############

from imptee import foo, show

show()
foo = 123
print 'foo from impter:', foo
show()


Upon running the importer, we discover that the importee's view of its foo variable has not changed even though we modified it in the importer.

foo from imptee: abc
foo from impter: 123
foo from imptee: abc


The only solution is to use import and to reference the fully qualified identifiers with the attribute/dotted notation.

#############
# impter.py #
#############

import imptee

imptee.show()
imptee.foo = 123
print 'foo from impter:', imptee.foo
imptee.show()


Once we make the update and change our references accordingly, we now have achieved the desired effect.

foo from imptee: abc
foo from impter: 123
foo from imptee: 123


12.5.5. Back to the __future__

Back in the days of Python 2.0, it was recognized that certain significant changes, such as new features and enhancements to existing ones, could not be implemented without affecting some existing functionality. To better prepare Python programmers for what was coming down the line, the __future__ directives were implemented.

By using the from-import statement and "importing" future functionality, users can get a taste of new features or feature changes, enabling them to port their applications correctly by the time the feature becomes permanent. The syntax is:

from __future__ import new_feature


It does not make sense to import __future__ as a way to enable everything, so that is disallowed. (Actually, importing it is allowed, but it does not do what you might want, which is to enable all future features.) You have to import specific features explicitly. You can read more about __future__ directives in PEP 236.
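
For example, true division was staged through __future__ (it became available this way in Python 2.2), and importing it changes the behavior of the / operator for the module (or interactive session) that issues the statement:

>>> 1 / 2                          # classic integer division in Python 2
0
>>> from __future__ import division
>>> 1 / 2                          # true division now in effect
0.5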

12.5.6. Warning Framework

Similar to the __future__ directive, it is also necessary to warn users when a feature is about to be changed or deprecated so that they can take action based on the notice received. There are multiple pieces to this feature, so we will break it down into components.

The first piece is the application programmer's interface (API). Programmers can issue warnings both from Python programs (via the warnings module) and from C [via a call to PyErr_Warn()].

Another part of the framework is a new set of warning exception classes. Warning is subclassed directly from Exception and serves as the root of all warnings: UserWarning, DeprecationWarning, SyntaxWarning, and RuntimeWarning. These are described in further detail in Chapter 10.

The next component is the warnings filter. There are warnings of different levels and severities, and somehow the number and type of warnings should be controllable. The warnings filter not only collects information about the warning, such as the line number, the cause of the warning, etc., but it also controls whether warnings are ignored, displayed (with optional custom formatting), or turned into errors (generating an exception).

Warnings are output to sys.stderr by default, but there are hooks to change that, for example, to log warnings instead of displaying them to the end user while a script runs. There is also an API to manipulate warning filters.
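
The following is a minimal sketch of the programmer's side of the framework; old_api() is a hypothetical deprecated function, and the filter call promotes its warning category to an error so that the caller can catch it as an exception:

import warnings

def old_api():
    # hypothetical deprecated function announcing its fate to callers
    warnings.warn('old_api() is deprecated; use new_api() instead',
                  DeprecationWarning)
    return 42

# turn this category of warning into an error (an exception is raised)
warnings.filterwarnings('error', category=DeprecationWarning)

try:
    old_api()
except DeprecationWarning, e:
    print 'caught:', e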

Finally, there are command-line arguments that control the warning filters; these are given to the Python interpreter at startup via the -W option. See the Python documentation or PEP 230 for the specific switches available in your version of Python. The warning framework first appeared in Python 2.1.
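
As an illustration (the script name is hypothetical), a filter that turns every DeprecationWarning into an error for a single run could be given like this:

$ python -W error::DeprecationWarning myscript.py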

12.5.7. Importing Modules from ZIP Files

In version 2.3, the feature that allows the import of modules contained inside ZIP archives was added to Python. If you add a .zip file containing Python modules (.py, .pyc, or .pyo files) to your search path, i.e., PYTHONPATH or sys.path, the importer will search that archive for the module as if the ZIP file was a directory.

If a ZIP file contains only the .py file for an imported module, Python will not attempt to modify the archive by adding the corresponding .pyc file. This means that if an archive does not contain matching .pyc files, import speed should be expected to be slower than if they were present.

You are also allowed to add specific (sub)directories "under" a .zip file, e.g., /tmp/yolk.zip/lib/ would import only from the lib/ subdirectory within the yolk archive. Although this feature is specified in PEP 273, the actual implementation uses the import hooks provided by PEP 302.
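
Here is a minimal sketch, assuming a hypothetical archive /tmp/yolk.zip that contains a module helper.py at its top level and more modules under a lib/ subdirectory:

import sys

sys.path.insert(0, '/tmp/yolk.zip')        # treat the archive like a directory
sys.path.insert(0, '/tmp/yolk.zip/lib')    # or just one subdirectory inside it

import helper                              # found and loaded from the ZIP file
print helper.__file__                      # e.g., /tmp/yolk.zip/helper.py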

12.5.8. "New" Import Hooks

The import of modules inside ZIP archives was "the first customer" of the new import hooks specified by PEP 302. Although we use the word "new," that is relative: writing custom importers has always been difficult, because the modules previously available for the job were either quite old or did not make writing importers much simpler. Another solution is to override __import__(), but that is not an easy thing to do either, because you pretty much have to (re)implement the entire import mechanism.

The new import hooks, introduced in Python 2.3, simplify it down to writing callable import classes, and getting them "registered" (or rather, "installed") with the Python interpreter via the sys module.

There are two classes that you need: a finder and a loader. An instance of these classes takes a single argument: the full name of a module or package. A finder instance will look for your module, and if it finds it, return a loader object. The finder can also take a path for finding subpackages. The loader is what actually brings the module into memory, doing whatever it needs to do to create a real Python module object, which the loader then returns.

These instances are added to sys.path_hooks. The sys.path_importer_cache simply holds the instances so that path_hooks has to be traversed only once. Finally, sys.meta_path is a list of instances that should be traversed before sys.path is even looked at, for modules whose location you already know and do not need to find. The meta-path already has the loader objects ready to execute for specific modules or packages.
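
The sketch below illustrates the protocol with a hypothetical finder/loader pair that recognizes only a module named hello and is installed on sys.meta_path; names such as HelloFinder and the greeting attribute are invented for this example.

import sys
import imp

class HelloFinder(object):
    """Hypothetical finder: claims only the module named 'hello'."""
    def find_module(self, fullname, path=None):
        if fullname == 'hello':
            return HelloLoader()       # hand the work off to a loader
        return None                    # let the normal machinery handle it

class HelloLoader(object):
    """Hypothetical loader: builds the module object in memory."""
    def load_module(self, fullname):
        mod = sys.modules.setdefault(fullname, imp.new_module(fullname))
        mod.__loader__ = self
        mod.greeting = 'created by a custom importer'
        return mod

sys.meta_path.append(HelloFinder())    # consulted before sys.path

import hello
print hello.greeting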


