REFERENCES

Data Mining: Opportunities and Challenges
John Wang (ed.), Idea Group Publishing, 2003. ISBN 1591400511.
Chapter II: Control of Inductive Bias in Supervised Learning Using Evolutionary Computation: A Wrapper-Based Approach

Benjamin, D. P. (ed.) (1990). Change of representation and inductive bias. Norwell, MA: Kluwer Academic Publishers.

Bogart, K. P. (1990). Introductory combinatorics, 2nd Ed. Orlando, FL: Harcourt.

Booker, L. B., Goldberg, D. E., & Holland, J. H. (1989). Classifier systems and genetic algorithms. Artificial Intelligence, 40, 235-282.

Box, G. E. P., Jenkins, G. M., & Reinsel, G. C. (1994). Time series analysis: Forecasting and control (3rd ed.). San Francisco, CA: Holden-Day.

Breiman, L. (1996). Bagging predictors. Machine Learning, 24, 123-140.

Brooks, F. P. (1995). The mythical man-month, Anniversary edition: Essays on software engineering. Reading, MA: Addison-Wesley.

Cantu-Paz, E. (1999). Designing efficient and accurate parallel genetic algorithms. Ph.D. thesis, University of Illinois at Urbana-Champaign. Technical report, Illinois Genetic Algorithms Laboratory (IlliGAL).

Cormen, T. H., Leiserson, C. E., Rivest, R. L., & Stein, C. (2001). Introduction to algorithms, 2nd edition. Cambridge, MA: MIT Press.

Cover, T. M. & Thomas, J. A. (1991). Elements of information theory. New York: John Wiley & Sons.

DeJong, K. A., Spears, W. M., & Gordon, D. F. (1993). Using genetic algorithms for concept learning. Machine Learning, 13, 161-188.

Donoho, S. K. (1996). Knowledge-guided constructive induction. Ph.D. thesis, Department of Computer Science, University of Illinois at Urbana-Champaign.

Duda, R. O., Hart, P. E., & Stork, D. (2000). Pattern classification, 2nd ed. New York: John Wiley & Sons.

Engels, R., Verdenius, F., & Aha, D. (1998). Proceedings of the 1998 Joint AAAI-ICML Workshop on the Methodology of Applying Machine Learning (Technical Report WS-98-16). Menlo Park, CA: AAAI Press.

Freund, Y. & Schapire, R. E. (1996). Experiments with a new boosting algorithm. In Proceedings of the 13th International Conference on Machine Learning, pp. 148-156. San Mateo, CA: Morgan Kaufmann.

Fu, L.-M. & Buchanan, B. G. (1985). Learning intermediate concepts in constructing a hierarchical knowledge base. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI-85), pp. 659-666, Los Angeles, CA.

Gaba, D. M., Fish, K. J., & Howard, S. K. (1994). Crisis management in anesthesiology. New York: Churchill Livingstone.

Geman, S., Bienenstock, E., & Doursat, R. (1992). Neural networks and the bias/variance dilemma. Neural Computation, 4, 1-58.

Gershenfeld, N. A. & Weigend, A. S. (eds.). (1994). The future of time series: Learning and understanding. In Time Series Prediction: Forecasting the Future and Understanding the Past (Santa Fe Institute Studies in the Sciences of Complexity XV). Reading, MA: Addison-Wesley.

Goldberg, D. E. (1989). Genetic algorithms in search, optimization, and machine learning. Reading, MA: Addison-Wesley.

Goldberg, D. E. (1998). The race, the hurdle, and the sweet spot: Lessons from genetic algorithms for the automation of design innovation and creativity. Technical report, Illinois Genetic Algorithms Laboratory (IlliGAL).

Grois, E., Hsu, W. H., Voloshin, M., & Wilkins, D. C. (1998). Bayesian network models for generation of crisis management training scenarios. In Proceedings of IAAI-98, pp. 1113-1120. Menlo Park, CA: AAAI Press.

Harik, G. & Lobo, F. (1997). A parameter-less genetic algorithm. Technical report, Illinois Genetic Algorithms Laboratory (IlliGAL).

Hassoun, M. H. (1995). Fundamentals of artificial neural networks. Cambridge, MA: MIT Press.

Hayes-Roth, B., Larsson, J. E., Brownston, L., Gaba, D., & Flanagan, B. (1996). Guardian Project home page, URL: http://www-ksl.stanford.edu/projects/guardian/.

Haykin, S. (1999). Neural networks: A comprehensive foundation, 2nd ed. Englewood Cliffs, NJ: Prentice Hall.

Heckerman, D. A. (1991). Probabilistic similarity networks. Cambridge, MA: MIT Press.

Heckerman, D. A. (1996). A tutorial on learning with Bayesian networks. Microsoft Research Technical Report 95-06, revised June 1996.

Hjorth, J. S. U. (1994). Computer intensive statistical methods: Validation, model selection and bootstrap. London: Chapman and Hall.

Horn, J. (1997). The nature of niching: Genetic algorithms and the evolution of optimal, cooperative populations. Ph.D. thesis, University of Illinois at Urbana-Champaign. Technical report, Illinois Genetic Algorithms Laboratory (IlliGAL).

Horvitz, E. & Barry, M. (1995). Display of information for time-critical decision making. In Proceedings of the 11th International Conference on Uncertainty in Artificial Intelligence (UAI-95), pp. 296-305. San Mateo, CA: Morgan Kaufmann.

Hsu, W. H. (1998). Time series learning with probabilistic network composites. Ph.D. thesis, University of Illinois at Urbana-Champaign. Technical Report UIUC-DCS-R2063. URL: http://www.kddresearch.org/Publications/Theses/PhD/Hsu.

Hsu, W. H., Gettings, N. D., Lease, V. E., Pan, Y., & Wilkins, D. C. (1998). A new approach to multi-strategy learning from heterogeneous time series. In Proceedings of the International Workshop on Multi-strategy Learning, Milan, Italy, June.

Hsu, W. H., Ray, S. R., & Wilkins, D. C. (2000). A multi-strategy approach to classifier learning from time series. Machine Learning, 38, 213-236.

Hsu, W. H., Welge, M., Redman, T., & Clutter, D. (2002). Constructive induction wrappers in high-performance commercial data mining and decision support systems. Data Mining and Knowledge Discovery, 6(4): 361-391, October.

Jordan, M. I., & Jacobs, R. A. (1994). Hierarchical mixtures of experts and the EM algorithm. Neural Computation, 6, 181-214.

Kohavi, R. & John, G. H. (1997). Wrappers for feature subset selection. Artificial Intelligence, Special Issue on Relevance, 97(1-2), 273-324.

Kohavi, R., Sommerfield, D., & Dougherty, J. (1996). Data mining using MLC++: A machine learning library in C++. In Tools with Artificial Intelligence, pp. 234-245. Rockville, MD: IEEE Computer Society Press. URL: http://www.sgi.com/Technology/mlc.

Kohonen, T. (1990). The self-organizing map. In Proceedings of the IEEE, 78: 1464-1480.

Koza, J. R. (1992). Genetic programming. Cambridge, MA: MIT Press.

Lang, K. J., Waibel, A. H., & Hinton, G. E. (1990). A time-delay neural network architecture for isolated word recognition. Neural Networks, 3, 23-43.

Lee, K.-F. (1989). Automatic speech recognition: The development of the SPHINX system. Norwell, MA: Kluwer Academic Publishers.

Li, T., Fang, L., & Li, K. Q-Q. (1993). Hierarchical classification and vector quantization with neural trees. Neurocomputing, 5, 119-139.

Lowe, D. (1995). Radial basis function networks. In M. A. Arbib (Ed.), The handbook of brain theory and neural networks, 779-782. Cambridge, MA: MIT Press.

Mehrotra, K., Mohan, C. K., & Ranka, S. (1997). Elements of artificial neural networks. Cambridge, MA: MIT Press.

Michalski, R. S. (1983). A theory and methodology of inductive learning. Artificial Intelligence, 20(2), 111-161. Reprinted in B. G. Buchanan, & D. C. Wilkins (Eds.), Readings in knowledge acquisition and learning. San Mateo, CA: Morgan Kaufmann.

Michalski, R. S., & Stepp, R. E. (1983). Learning from observation: Conceptual clustering. In R. S. Michalski, J. G. Carbonell, & T. M. Mitchell (Eds.), Machine learning: An artificial intelligence approach. San Mateo, CA: Morgan Kaufmann.

Mitchell, T. M. (1997). Machine learning. New York: McGraw-Hill.

Mozer, M. C. (1994). Neural net architectures for temporal sequence processing. In A. S. Weigend & N. A. Gershenfeld (Eds.), Time series prediction: Forecasting the future and understanding the past (Santa Fe Institute Studies in the Sciences of Complexity XV). Reading, MA: Addison-Wesley.

Neal, R. M. (1996). Bayesian learning for neural networks. New York: Springer-Verlag.

Palmer, W. C. (1965). Meteorological drought. Research Paper Number 45, Office of Climatology, United States Weather Bureau.

Príncipe, J. & deVries, B. (1992). The gamma model: A new neural net model for temporal processing. Neural Networks, 5, 565-576.

Príncipe, J. & Lefebvre, C. (2001). NeuroSolutions v4.0, Gainesville, FL: NeuroDimension. URL: http://www.nd.com/.

Ray, S. R. & Hsu, W. H. (1998). Self-organized-expert modular network for classification of spatiotemporal sequences. Journal of Intelligent Data Analysis, 2(4).

Resnick, P. & Varian, H. R. (1997). Recommender systems. Communications of the ACM, 40(3): 56-58.

Russell, S. & Norvig, P. (1995). Artificial intelligence: A modern approach. Englewood Cliffs, NJ: Prentice Hall.

Sarle, W. S. (ed.) (2002). Neural network FAQ. Periodic posting to the Usenet newsgroup comp.ai.neural-nets, URL: ftp://ftp.sas.com/pub/neural/FAQ.html.

Schuurmans, D. (1997). A new metric-based approach to model selection. In Proceedings of the Fourteenth National Conference on Artificial Intelligence (AAAI-97), Providence, RI, 552-558. Menlo Park, CA: AAAI Press.

Stein, B. & Meredith, M. A. (1993). The merging of the senses. Cambridge, MA: MIT Press.

Stone, M. (1977). An asymptotic equivalence of choice of models by cross-validation and Akaike's criterion. Journal of the Royal Statistical Society, Series B, 39, 44-47.

Watanabe, S. (1985). Pattern recognition: Human and mechanical. New York: John Wiley and Sons.

Witten, I. H. & Frank, E. (2000). Data mining: Practical machine learning tools and techniques with Java implementations. San Mateo, CA: Morgan Kaufmann.

Wolpert, D. H. (1992). Stacked generalization. Neural Networks, 5, 241-259.
