Feature selection, extraction and construction

Abstract

Feature selection is a process that chooses a subset of the original features so that the feature space is optimally reduced according to a certain criterion. Feature extraction/construction is a process through which a set of new features is created. These techniques are used either in isolation or in combination, and all attempt to improve performance measures such as estimated accuracy, visualization, and comprehensibility of the learned knowledge. Basic approaches to all three are reviewed, with pointers to references for further study.
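
To make the three operations concrete, here is a minimal Python sketch (ours, not part of the original abstract). It assumes numpy and scikit-learn are available; the synthetic dataset and the choices of criterion (ANOVA F-score), extractor (PCA), and constructed feature (a pairwise product) are illustrative only.

```python
# Minimal sketch contrasting selection, extraction and construction.
# Assumptions: numpy + scikit-learn; all parameter choices are arbitrary.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif

X, y = make_classification(n_samples=200, n_features=10,
                           n_informative=3, random_state=0)

# Selection: keep a subset of the ORIGINAL features,
# ranked here by the ANOVA F-statistic against the class label.
X_sel = SelectKBest(score_func=f_classif, k=3).fit_transform(X, y)

# Extraction: map all original features into a few NEW features,
# here the first three principal components.
X_ext = PCA(n_components=3).fit_transform(X)

# Construction: augment the space with a NEW feature derived from
# old ones, here the product of the first two columns.
X_con = np.column_stack([X, X[:, 0] * X[:, 1]])

print(X_sel.shape, X_ext.shape, X_con.shape)  # (200, 3) (200, 3) (200, 11)
```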

FAQs


What insights does the intelligent multi-objective evolutionary algorithm provide for feature selection?

The algorithm significantly outperforms traditional methods like GA, achieving 99.33% classification accuracy on datasets like iris and wine.

How does feature selection impact the predictive accuracy in data mining tasks?

Feature selection enhances predictive accuracy and reduces computational load by eliminating irrelevant features, leading to up to 95% improvement in some models.

What role does granule definition play in the semantics of knowledge representation?

Granules define distinct equivalence classes, enabling the formulation of meaningful rules and classifications in knowledge-based systems and improving data interpretations.
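
As a small illustration of this idea (ours, not from any cited paper), the sketch below partitions a hypothetical decision table into the equivalence classes (granules) induced by an attribute subset, as in rough-set indiscernibility, and reads off a rule wherever the decision is constant on a granule.

```python
# Minimal sketch: granules as equivalence classes of indiscernibility.
# The toy table and attribute names are hypothetical.
from collections import defaultdict

table = [
    {"outlook": "sunny", "windy": False, "play": "no"},
    {"outlook": "sunny", "windy": True,  "play": "no"},
    {"outlook": "rainy", "windy": False, "play": "yes"},
    {"outlook": "rainy", "windy": True,  "play": "no"},
]

def granules(rows, attrs):
    """Partition row indices by their values on `attrs`; each group is
    one equivalence class (granule) of the indiscernibility relation."""
    classes = defaultdict(list)
    for i, row in enumerate(rows):
        classes[tuple(row[a] for a in attrs)].append(i)
    return dict(classes)

for values, members in granules(table, ["outlook"]).items():
    decisions = {table[i]["play"] for i in members}
    if len(decisions) == 1:  # decision constant on the granule -> certain rule
        print(f"IF outlook={values[0]} THEN play={decisions.pop()}")
    else:                    # boundary granule -> no certain rule
        print(f"outlook={values[0]}: boundary granule, decisions={decisions}")
```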

How are derived attributes generated and evaluated in data mining frameworks?

Derived attributes are generated through transformations of existing features, evaluated based on their contribution to accuracy or other relevant performance metrics.
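
A minimal sketch of this generate-and-evaluate loop follows (illustrative only: the pairwise-product transformation, the decision-tree learner, and cross-validated accuracy as the evaluation metric are our assumptions, not a prescribed method).

```python
# Minimal sketch: derive candidate attributes as pairwise products and
# score each by the cross-validated accuracy it adds to a base learner.
import numpy as np
from itertools import combinations
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(random_state=0)
base = cross_val_score(clf, X, y, cv=5).mean()    # accuracy on original features

for i, j in combinations(range(X.shape[1]), 2):
    derived = (X[:, i] * X[:, j]).reshape(-1, 1)  # the transformation
    score = cross_val_score(clf, np.hstack([X, derived]), y, cv=5).mean()
    print(f"x{i}*x{j}: accuracy {score:.3f} (base {base:.3f})")
```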

What challenges exist in sampling methods for data mining in various domains?

Challenges include maintaining representative samples, handling skewed distributions, and ensuring accuracy while minimizing resource consumption in diverse data contexts.
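
One standard response to the representativeness problem is stratified sampling, which samples within each class separately so that a skewed distribution is preserved. The sketch below is a minimal illustration with hypothetical labels, not a method taken from the cited texts.

```python
# Minimal sketch: stratified sampling preserves class proportions that
# plain random sampling can distort on skewed data. Labels are made up.
import numpy as np

rng = np.random.default_rng(0)
y = np.array([0] * 950 + [1] * 50)  # hypothetical 95:5 class skew

def stratified_indices(labels, fraction, rng):
    """Sample `fraction` of the indices within each class separately."""
    picked = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        k = max(1, int(round(fraction * idx.size)))
        picked.append(rng.choice(idx, size=k, replace=False))
    return np.concatenate(picked)

sample = stratified_indices(y, 0.10, rng)
print(np.bincount(y[sample]))  # ~[95  5]: the skew is preserved
```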


References

  1. Kuncheva, L. I., Bezdek, J. C.: Nearest prototype classification: clustering, genetic algorithms, or random search? IEEE Trans. on SMC-Part C: Applications and Reviews 28(1) (1998) 160-164
  2. Nakashima, T., Ishibuchi, H.: GA-Based Approaches for Finding the Minimum Reference Set for Nearest Neighbor Classification. In Proc. of IEEE Conf. on Computational Intelligence (1998) 709-714
  3. Yang, M.-S., Chen, C.-H.: On the edited fuzzy k-nearest neighbor rule. IEEE Trans. on SMC-Part B: Cybernetics 28(3) (1998) 461-466
  4. Bezdek, J. C. et al.: Multiple-prototype classifier design. IEEE Trans. on SMC-Part C: Applications and Reviews 28(1) (1998) 67-79
  5. Goldberg, D. E.: Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley Publishing Company (1989)
  6. Deb, K.: Multi-Objective Optimization Using Evolutionary Algorithms. John Wiley & Sons (2001)
  7. Schaffer, J. D.: Multi-objective optimization with vector evaluated genetic algorithms. In Proc. of 1st Int. Conference on Genetic Algorithms (1985) 93-100
  8. Horn, J., Nafpliotis, N., Goldberg, D. E.: A niched Pareto genetic algorithm for multi-objective optimization. In Proc. of 1st IEEE Int. Conference on Evolutionary Computation (1994) 82-87
  9. Srinivas, N., Deb, K.: Multiobjective optimization using non-dominated sorting in genetic algorithms. Evolutionary Computation 2(3) (1994) 221-248
  10. Zitzler, E., Thiele, L.: Multiobjective evolutionary algorithms: A comparative case study and the strength Pareto approach. IEEE Trans. on Evolutionary Computation 3(4) (1999) 257-271
  11. Ho, S.-Y., Chang, X.-I.: An efficient generalized multiobjective evolutionary algorithm. In GECCO-99: Proc. of the Genetic and Evolutionary Computation Conference (1999) 871-878
  12. Chen, J.-H., Ho, S.-Y.: Evolutionary multi-objective optimization of flexible manufacturing systems. In GECCO-2001: Proc. of the Genetic and Evolutionary Computation Conference (2001) 1260-1267
  13. Ho, S.-Y., Chen, Y.-C.: An efficient evolutionary algorithm for accurate polygonal approximation. Pattern Recognition 34 (2001) 121-133
  14. Dey, A.: Orthogonal Fractional Factorial Designs. Wiley, New York (1985)
  15. Hicks, C. R., Turner, K. V. Jr.: Fundamental Concepts in the Design of Experiments. 5th edn. Oxford University Press, New York (1999)
  16. Zhang, Q., Leung, Y.-W.: An orthogonal genetic algorithm with quantization for global numerical optimization. IEEE Trans. on Evolutionary Computation 5(1) (2001) 41-53
  18. P. W. Baim. A method for attribute selection in inductive learning systems. IEEE Transactions on Pattern Analysis and Machine Intelligence, 10(9):888-896, 1988.
  19. J. C. Bezdek. Pattern Recognition with Fuzzy Objective Function Algorithms. Plenum Press, New York, 1981.
  20. X. Boyen and L. Wehenkel. Automatic induction of fuzzy decision trees and its application to power system security assessment. Fuzzy Sets and Systems, 102:3-19, 1999.
  21. L. Breiman, J. Friedman, R. Olshen, and C. Stone. Classification and Regression Trees. Chapman and Hall, London, 1984.
  22. W. Buntine. Learning classification trees. Statistics and Computing, 2:63-73, 1992.
  23. R. G. Casey and G. Nagy. Decision tree design using a probabilistic model. IEEE Transactions on Information Theory, 30(1):93-99, 1984.
  24. I. Chiang and J. Hsu. Integration of fuzzy classifiers with decision trees. In Proceedings of Asian Fuzzy Systems Symposium, pages 65-78, Kenting, Taiwan, 1996.
  25. I. Chiang and J. Hsu. Fuzzy classification trees for data analysis. Fuzzy Sets and Systems, 2002.
  26. K. J. Cios and L. M. Sztandera. Continuous ID3 algorithm with fuzzy entropy measures. In Proceedings of the International Conference on Fuzzy Systems, pages 469-476, San Diego, CA, 1992.
  27. P. Clark and T. Niblett. The CN2 induction algorithm. Machine Learning, 3:261-283, 1989.
  28. R. Duda and P. Hart. Pattern Classification and Scene Analysis. John Wiley and Sons, New York, 1973.
  29. D. Heath, S. Kasif, and S. Salzberg. Learning oblique decision trees. In Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence, pages 1002-1007, Chambery, France, 1993.
  30. J. Y. Hsu and I. Chiang. Fuzzy classification trees. In Proceedings of the Ninth International Symposium on Artificial Intelligence, pages 431-438, Cancun, Mexico, 1996.
  31. C. Z. Janikow. Fuzzy decision trees: Issues and methods. IEEE Trans. on System, Man, and Cybernetics B: Cybernetics, 28(1):1-14, 1998.
  32. R. Kerber. ChiMerge: Discretization of numeric attributes. In Proceedings of the Tenth National Conference on Artificial Intelligence, pages 123-128, San Jose, CA, 1992.
  33. A. De Luca and S. Termini. A definition of a nonprobabilistic entropy in the setting of fuzzy sets theory. Information and Control, 20:301-312, 1976.
  34. R. S. Michalski. Learning flexible concepts: Fundamental ideas and method based on two-tiered representation. In Y. Kodratoff and R. S. Michalski, editors, Machine Learning: An Artificial Intelligence Approach, volume III. Morgan Kaufmann, Los Altos, CA, 1990.
  35. J. Mingers. An empirical comparison of selection measures for decision-tree induction. Machine Learning, 3:319-342, 1989.
  36. P. M. Murphy and M. J. Pazzani. Exploring the decision forest: An empirical investigation of Occam's razor in decision tree induction. Journal of Artificial Intelligence Research, 1:257-275, 1994.
  37. S. K. Murthy. On Growing Better Decision Trees from Data. PhD dissertation, The Johns Hopkins University, Baltimore, Maryland, 1995.
  38. S. K. Murthy, S. Kasif, and S. Salzberg. A system for induction of oblique decision trees. Journal of Artificial Intelligence Research, 2:1-32, 1994.
  39. S. K. Murthy, S. Kasif, S. Salzberg, and R. Beigel. OC1: Randomized induction of oblique decision trees. In Proceedings of the Eleventh National Conference on Artificial Intelligence, pages 322-327, Washington, DC, 1993.
  40. M. Pazzani and D. Kibler. The utility of knowledge in inductive learning. Machine Learning, 9(1):57-94, 1991.
  41. W. Pedrycz and Z. A. Sosnowski. The design of decision trees in framework of granular data and their application to software quality models. Fuzzy Sets and Systems, 123:271-290, 2001.
  42. J. R. Quinlan. Induction of decision trees. Machine Learning, 1:81-106, 1986.
  43. J. R. Quinlan. Learning logical definitions from relations. Machine Learn- ing, 5:239-266, 1990.
  44. J. R. Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann, Los Altos, CA, 1993.
  45. L. Rendell and H. Cho. Empirical learning as a function of concept character. Machine Learning, 5(3):267-298, 1990.
  46. R. Rivest. Learning decision lists. Machine Learning, 2:229-246, 1987.
  47. D. Rumelhart, G. Hinton, and R. Williams. Learning internal representations by error propagation. In D. Rumelhart and J. McClelland, editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 1: Foundations, pages 318-362. MIT Press, Cambridge, MA, 1986.
  48. R. Rymon. An SE-tree based characterization of the induction problem. In Proceedings of the Tenth International Conference on Machine Learning, pages 268-275, Amherst, MA, 1993.
  49. J. Sauar, G. Hoff, and T. Hausken. Colonoscopic screening examination of relatives of patients with colorectal cancer. Scandinavian Journal of Gastroenterology, 27:667-672, 1992.
  50. J. Schuermann and W. Doster. A decision theoretic approach to hierarchical classifier design. Pattern Recognition, 17(3):359-369, 1984.
  51. C. E. Shannon. A mathematical theory of communication. The Bell System Technical Journal, 27:379-423, 623-656, 1948.
  52. M. Shieh, I. Chiang, J. Wong, C. Huang, S. Huang, and C. Wang. Prevalence of colorectal polyps in Taiwan: 60cm-sigmoidoscopic findings. Biomedical Engineering-Application, Basis, Communication, 7(3):50-55, 1995.
  53. E. H. Shortliffe. Computer programs to support clinical decision making. Journal of the American Medical Association, 258:61-66, 1987.
  54. A. Suárez and J. F. Lutsko. Globally optimal fuzzy decision trees for classification and regression. IEEE Trans. on Pattern Analysis and Machine Intelligence, 21(12):1297-1311, 1999.
  55. Y. Yuan and M. J. Shaw. Induction of fuzzy decision trees. Fuzzy Sets and Systems, 69:125-139, 1995.
  56. G. Salton and M.J. McGill. Introduction to Modern Information Retrieval, 1983
  57. G.A. Miller, R. Beckwith, C. Fellbaum, D. Gross and K. Miller. Introduction to WordNet: an On-line Lexical Database. In WordNet: an Electronic Lexical Database, 1-19, 1998
  58. M. Mitra, C. Buckley, A. Singhal and C. Cardie. An Analysis of Statistical and Syntactic Phrases. In Proc. RIAO97, 200-214, 1997
  59. R. Richardson and A.F. Smeaton. Using WordNet in a Knowledge-based Approach to Information Retrieval. In Proc. 17th BCS-IRSG, 1995
  60. M. Sussna. Text Retrieval using Inference in Semantic Metanetworks. PhD Thesis, University of California, San Diego, 1997
  61. E.M. Voorhees. Using WordNet to Disambiguate Word Sense for Text Retrieval. In Proc. 16th ACM-SIGIR, 171-180, 1993
  62. J.B. Lovins. Development of a Stemming Algorithm. In Mechanical Translation and Computational Linguistics, 11(1-2), 11-31, 1968
  63. A.F. Smeaton and I. Quigley. Experiments on using Semantic Distances Between Words in Image Caption Retrieval. In Proc. 19th ACM-SIGIR, 174-180, 1996
  64. D. Johnson, W.W. Chu, J.D. Dionisio, R.K. Taira and H. Kangarloo. Creating and Indexing Teaching Files from Free-text Patient Reports. In AMIA'99, 1999
  65. J.A. Goldman, W.W. Chu, D.S. Parker and R.M. Goldman. Term Domain Distribution Analysis: A Data Mining Tool for Text Databases. In 2001 IMIA Yearbook of Medical Informatics, 96-101, 2001
  66. N. Ide and J. Véronis. Word Sense Disambiguation: the State of the Art. In Computational Linguistics, 24(1), 1-40, 1998
  67. W. Hersh, C. Buckley, T.J. Leone and D. Hickam. OHSUMED: an Interactive Retrieval Evaluation and New Large Test Collection for Research. In Proc. 17th ACM-SIGIR Conf., 191-197, 1994
  68. National Library of Medicine. UMLS Knowledge Sources, 12th edition, 2001
  69. J. Lyons. Semantics, 1977
  70. A.V. Aho and M.J. Corasick. Efficient String Matching: an Aid to Bibliographic Search. In CACM, 18(6), 330-340, 1975
  71. P. Blackburn. "Representation, reasoning, and relational structures: a hybrid logic manifesto". Logic Journal of IGPL, 8(3):339-365, 2000.
  72. R. Fagin, J.Y. Halpern, Y. Moses, and M.Y. Vardi. Reasoning about Knowledge. MIT Press, 1996.
  73. T.F. Fan, W.C. Hu, and C.J. Liau. "Decision logics for knowledge representation in data mining". In Proceedings of the 25th Annual International Computer Software and Applications Conference (COMPSAC), pages 626-631. IEEE Press, 2001.
  74. P. Hájek. "Logics for data mining (GUHA rediviva)". Neural Network World, 10:301-311, 2000.
  75. T.-s. Hsu, C.-J. Liau, and D.-W. Wang. A logical model for privacy protection. In Proceedings of the 4th International Conference on Information Security, LNCS 2200, pages 110-124. Springer-Verlag, 2001.
  76. A.J. Hundepool and L.C.R.J. Willenborg. "µ- and τ-ARGUS: Software for statistical disclosure control". In Proceedings of the 3rd International Seminar on Statistical Confidentiality, 1996.
  77. R. Kruse, C. Borgelt, and D. Nauck. "Fuzzy data analysis: challenges and perspectives". In Proceedings of the 8th IEEE International Conference on Fuzzy Systems, pages 1211-1216, San Francisco, CA, 1999. IEEE.
  78. M. Krynicki, M. Mostowski, and L.W. Szczerba, editors. Quantifiers: Logics, Models and Computation. Kluwer Academic Publishers, 1995.
  79. M. Kryszkiewicz and H. Rybiński. "Reducing information systems with uncertain attributes". In Z. W. Raś and M. Michalewicz, editors, Proceedings of the 9th ISMIS, LNAI 1079, pages 285-294. Springer-Verlag, 1996.
  80. W. Lee, S.J. Stolfo, and K.W. Mok. A data mining framework for building intrusion detection models. In Proceedings of the 1999 IEEE Symposium on Security and Privacy, pages 120-132, 1999.
  81. W. Lee and D. Xiang. Information-theoretic measures for anomaly detection. In Proceedings of the 2001 IEEE Symposium on Security and Privacy, pages 130-143, 2001.
  82. C.J. Liau and D.R. Liu. "A logical approach to fuzzy data analysis". In J.M. Zytkow and J. Rauch, editors, Proceedings of the Third European Conference on Principles of Data Mining and Knowledge Discovery, LNAI 1704, pages 412-417. Springer-Verlag, 1999.
  83. C.J. Liau and D.R. Liu. "A possibilistic decision logic with applications". Fundamenta Informaticae, 46(3), 2001.
  84. Z. Pawlak. Rough Sets: Theoretical Aspects of Reasoning about Data. Kluwer Academic Publishers, 1991.
  85. P. Samarati. "Protecting respondents' identities in microdata release". IEEE Transactions on Knowledge and Data Engineering, 13(6):1010-1027, 2001.
  86. P. Samarati and L. Sweeney. Protecting privacy when disclosing information: k-anonymity and its enforcement through generalization and suppression. Technical report SRI-CSL-98-04, Computer Science Laboratory, SRI International, 1998.
  87. L. Sweeney. "Guaranteeing anonymity when sharing medical data, the Datafly system". In Proceedings of American Medical Informatics Association, 1997.
  88. Y.Y. Yao and C.J. Liau. "A generalized decision logic language for granular computing". In Proceedings of the 11th IEEE International Conference on Fuzzy Systems, to appear. IEEE Press, 2002.
  89. Y.Y. Yao and Q. Liu. "A generalized decision logic in interval-set-valued information tables". In N. Zhong, A. Skowron, and S. Ohsuga, editors, New Directions in Rough Sets, Data Mining, and Granular-Soft Computing, LNAI 1711, pages 285-293. Springer-Verlag, 1999.
  90. Y.Y. Yao and N. Zhong. "An analysis of quantitative measures associated with rules". In Proceedings of the 2nd Pacific-Asia Conference on Knowledge Discovery and Data Mining, pages 479-488. IEEE Press, 1999.
  91. L.A. Zadeh. "Fuzzy logic = computing with words". IEEE Transactions on Fuzzy Systems, 4:103-111, 1996.
  92. R. Agrawal, T. Imielinski and A. Swami, "Mining association rules between sets of items in large databases," The 1993 ACM SIGMOD Conference, Washington DC, USA, 1993.
  93. R. Agrawal, T. Imielinski and A. Swami, "Database mining: a performance perspective," IEEE Transactions on Knowledge and Data Engineering, Vol. 5, No. 6, 1993, pp. 914-925.
  94. A. F. Blishun, "Fuzzy learning models in expert systems," Fuzzy Sets and Systems, Vol. 22, 1987, pp. 57-70.
  95. L. M. de Campos and S. Moral, "Learning rules for a fuzzy inference model," Fuzzy Sets and Systems, Vol. 59, 1993, pp. 247-257.
  96. R. L. P. Chang and T. Pavlidis, "Fuzzy decision tree algorithms," IEEE Transactions on Systems, Man and Cybernetics, Vol. 7, 1977, pp. 28-35.
  97. M. S. Chen, J. Han and P. S. Yu, "Data mining: an overview from a database perspective," IEEE Transactions on Knowledge and Data Engineering, Vol. 8, No. 6, 1996.
  98. M. Delgado and A. Gonzalez, "An inductive learning procedure to identify fuzzy systems," Fuzzy Sets and Systems, Vol. 55, 1993, pp. 121-132.
  99. T. Fukuda, Y. Morimoto, S. Morishita and T. Tokuyama, "Mining optimized association rules for numeric attributes," The ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, June 1996, pp. 182-191.
  100. J. Han and Y. Fu, "Discovery of multiple-level association rules from large databases," The International Conference on Very Large Databases, 1995.
  101. T. P. Hong, C. S. Kuo and S. C. Chi, "A data mining algorithm for transaction data with quantitative values," Intelligent Data Analysis, Vol. 3, No. 5, 1999, pp. 363-376.
  102. T. P. Hong and J. B. Chen, "Finding relevant attributes and membership functions," Fuzzy Sets and Systems, Vol.103, No. 3, 1999, pp. 389-404.
  103. T. P. Hong and J. B. Chen, "Processing individual fuzzy attributes for fuzzy rule induction," Fuzzy Sets and Systems, Vol. 112, No. 1, 2000, pp. 127-140.
  104. T. P. Hong and C. Y. Lee, "Induction of fuzzy rules and membership functions from training examples," Fuzzy Sets and Systems, Vol. 84, 1996, pp. 33-47.
  105. T. P. Hong, K. Y. Lin and S. L. Wang, "Mining generalized association rules from quantitative data", The International Workshop on Intelligent Systems Resolutions in The Eighth Bellman Continuum, 2000, pp. 75-78.
  106. T. P. Hong and S. S. Tseng, "A generalized version space learning algorithm for noisy and uncertain data," IEEE Transactions on Knowledge and Data Engineering, Vol. 9, No. 2, 1997, pp. 336-340.
  107. A. Kandel, Fuzzy Expert Systems, CRC Press, Boca Raton, 1992, pp. 8-19.
  108. R. Srikant, Q. Vu and R. Agrawal, "Mining association rules with item constraints," The Third International Conference on Knowledge Discovery in Databases and Data Mining, Newport Beach, California, August 1997, pp.67-73.
  109. R. Srikant and R. Agrawal, "Mining quantitative association rules in large relational tables," The 1996 ACM SIGMOD International Conference on Management of Data, Montreal, Canada, June 1996, pp. 1-12.
  110. R. Srikant and R. Agrawal, "Mining Generalized Association Rules," The International Conference on Very Large Databases, 1995.
  111. C. H. Wang, J. F. Liu, T. P. Hong and S. S. Tseng, "A fuzzy inductive learning strategy for modular rules," Fuzzy Sets and Systems, Vol.103, No. 1, 1999, pp. 91-105.
  112. L. A. Zadeh, "Fuzzy sets," Information and Control, Vol. 8, No. 3, 1965, pp. 338-353.
  114. C. L. Blake and C. J. Merz. UCI Repository of machine learning databases. Irvine, CA: University of California, Department of Information and Computer Science, 1998. [http://www.ics.uci.edu/~mlearn/MLRepository.html]
  115. G. Cestnik, I. Kononenko and I. Bratko, "Assistant-86: A Knowledge-Elicitation Tool for Sophisticated Users," Progress in Machine Learning, pp. 31-45, 1987.
  116. H. M. Chen and S. Y. Ho, "Designing an Optimal Evolutionary Fuzzy Decision Tree for Data Mining," Proceedings of the Genetic and Evolutionary Computation Conference, pp. 943-950, 2001.
  117. P. Clark and T. Niblett, "Induction in Noisy Domains," Progress in Machine Learning (from the Proceedings of the 2nd European Working Session on Learning), pp. 11-30, 1987.
  118. K. A. De Jong, W. M. Spears, D. F. Gordon, "Using Genetic Algorithms for Concept Learning," Machine Learning, vol. 13, no. 2, pp. 161-188, 1993.
  119. G. Demiroz, H. A. Güvenir and N. Ilter, "Learning differential diagnosis of erythemato-squamous diseases using voting feature intervals," Artificial Intelligence in Medicine, v. 13, pp. 147-165, 1998.
  120. U. M. Fayyad, G. Piatetsky-Shapiro, P. Smyth, "From data mining to knowledge discovery: An overview," Advances in Knowledge Discovery and Data Mining, chap. 1, pp. 1-34, AAAI Press and MIT Press, 1996.
  121. M. V. Fidelis, H. S. Lopes and A. A. Freitas, "Discovering Comprehensible Classification Rules with a Genetic Algorithm," Proceedings of the 2000 Congress on Evolutionary Computations, pp. 805-810, 2000.
  122. J. H. Holland, Adaptation in Natural and Artificial Systems, Univ. of Michigan Press (Ann Arbor), 1975.
  123. J. H. Holland, "Escaping brittleness: the possibilities of general-purpose learning algorithms applied to parallel rule-based systems," Machine Learning, an artificial intelligence approach, 2, 1986.
  124. P. Horton and K. Nakai, "A Probabilistic Classification System for Predicting the Cellular Localization Sites of Proteins," Intelligent Systems in Molecular Biology, pp. 109-115, 1996.
  125. W. W. Hsu and C. C. Hsu, "GEC: An Evolutionary Approach for Evolving Classifiers," to appear in Proceedings of the 6th Pacific-Asia Conference on Knowledge Discovery and Data Mining, Taipei, Taiwan, 2002.
  126. R. Kohavi, "Scaling Up the Accuracy of Naïve-Bayes Classifiers: a Decision-Tree Hybrid," Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, pp. 202-207, 1996.
  127. C. H. Liu, C. C. Lu and W. P. Lee, "Document Categorization by Genetic Algorithms," IEEE International Conference on Systems, Man and Cybernetics (SMC), pp. 3868-3872, 2000.
  128. S. F. Smith, A Learning System Based on Genetic Adaptive Algorithms, PhD Thesis, Univ. of Pittsburgh, 1980.
  129. S. Brin, R. Motwani, J.D. Ullman, and S. Tsur, "Dynamic Itemset Counting and Implication Rules for Market Basket Data", SIGMOD Record, Volume 26, Number 2, New York, June 1997, pp. 255-264.
  130. A. Tucker, S. Swift, and X. Liu, "Variable grouping in multivariate time series via correlation", IEEE Transactions on Systems, Man and Cybernetics, Part B, Volume 31, Issue 2, April 2001, pp. 235-245.
  131. R. Agrawal, T. Imielinski, and A. Swami, "Database Mining: A Performance Perspective", IEEE Transactions on Knowledge and Data Engineering, December 1993, pp. 914-925.
  132. R. Agrawal, T. Imielinski, and A. Swami, "Mining Association Rules between Sets of Items in Large Databases", Proc. of the ACM SIGMOD Int'l Conf. on Management of Data, May 1993, pp. 207-216.
  133. R. Agrawal, K. Lin, S. Sawhney, and K. Shim, "Fast similarity search in the presence of noise, scaling and translation in time-series databases", In Proc. of the Int'l Conf. on Very Large Data Bases (VLDB), 1995, pp. 490-501.
  134. R. Srikant and R. Agrawal, "Mining generalized association rules", Proceedings of the 21st International Conference on Very Large Data Bases (VLDB'95), Zurich, Switzerland, 1995, pp. 407-419.
  135. J.S. Park, M.S. Chen, and P.S. Yu, "An effective hash-based algorithm for mining association rules", In Proc. 1995 ACM-SIGMOD, pp. 175-186.
  136. S. Brin, R. Motwani, and C. Silverstein, "Beyond market basket: Generalizing association rules to correlations", In Proc. 1997 SIGMOD, pp. 265-276.
  137. H. Toivonen, "Sampling large databases for association rules", Proc. Of the Int'l Conf. On Very Large Data Bases (VLDB), 1996, pp. 134-145.
  138. R. Agrawal and R. Srikant, "Fast Algorithms for Mining Association Rules in Large Databases", Proceedings of the 20th International Conference on Very Large Data Bases, September 1994, pp. 487-499.
  139. M. Houtsma and A. Swami, "Set-Oriented Mining of Association Rules", Technical Report RJ 9567, IBM Almaden Research Laboratory, San Jose, CA, October 1993.
  140. M. L. Wong and K. S. Leung, Data mining using grammar based genetic programming and applications, Boston: Kluwer Academic, 2000.
  141. B.D. Ripley. Pattern Recognition and Neural Networks. Cambridge University Press, Cambridge, UK, 1996.
  142. E. Forgy. Cluster analysis of multivariate data: efficiency vs. interpretability of classifications. Biometrics, 21:768, 1965.
  143. J. MacQueen. On convergence of k-means and partitions with minimum average variance. Ann. Math. Statist., 36:1084, 1965.
  144. A. Blum and Shuchi Chawla. Combining labeled and unlabeled data with co-training. In The Eighteenth International Conference on Machine Learning, 2001.
  145. S. Goldman and Y. Zhou. Enhancing supervised learning with unlabeled data. In Proceedings of the Seventeenth International Conference on Machine Learning, 2000.
  146. K. Nigam, A. McCallum, Sebastian Thrun, and Tom Mitchell. Text classification from labeled and unlabeled documents using EM. Machine Learning, 34(1), 1999.
  147. T. S. Chiang and Y. Chow. Optimization approaches to semi-supervised learning. In M. C. Ferris, O. L. Mangasarian, and J. S. Pang, editors, Applications and Algorithms of Complementarity. Kluwer Academic Publishers, 2000.
  148. Ross Kindermann. Markov random fields and their applications. American Mathematical Society, Providence, R.I., 1980.
  149. M. Nadler and E. P. Smith. Pattern Recognition Engineering. Wiley-Interscience, New York, 1993.
  151. R. Agrawal, T. Imielinski, and A. Swami, "Mining Association Rules Between Sets of Items in Large Databases," in Proceedings of the ACM SIGMOD International Conference on Management of Data, pp. 207-216, Washington, DC, June 1993.
  152. Herbert B. Enderton, A Mathematical Introduction to Logic, Academic Press, 1972.
  153. T. T. Lee, "Algebraic Theory of Relational Databases," The Bell System Technical Journal, Vol. 62, No. 10, December 1983, pp. 3159-3204.
  154. T. Y. Lin, "The Lattice Structure of Database and Mining Multiple Level Rules." Presented at the Workshop on Data Mining and E-organizations, COMPSAC 2001, Chicago, Oct 8-12, 2001; to appear as "Feature Transformations and Structure of Attributes," in: Data Mining and Knowledge Discovery: Theory, Tools, and Technology IV, Proceedings of SPIE AeroSense 2002, April 1-5, 2002, Orlando, FL.
  155. T. Y. Lin, "Data Mining and Machine Oriented Modeling: A Granular Computing Approach," Journal of Applied Intelligence, Kluwer, Vol. 13, No. 2, September/October 2000, pp. 113-124.
  156. T. Y. Lin, "Data Mining: Granular Computing Approach." In: Methodologies for Knowledge Discovery and Data Mining, Lecture Notes in Artificial Intelligence 1574, Third Pacific-Asia Conference, Beijing, April 26-28, 1999, 24-33.
  157. T. Y. Lin and J. Tremba, "Attribute Transformations on Numerical Databases," Lecture Notes in Artificial Intelligence 1805, Terano, Liu, Chen (eds), PAKDD2000, Kyoto, Japan, April 18-20, 2000, 181-192.
  158. H. Liu and H. Motoda, "Feature Transformation and Subset Selection," IEEE Intelligent Systems, Vol. 13, No. 2, March/April, pp. 26-28 (1998).
  159. H. Liu and H. Motoda (eds), Feature Extraction, Construction and Selection: A Data Mining Perspective, Kluwer Academic Publishers (1998).
  160. E. Louie and T. Y. Lin, "Finding Association Rules using Fast Bit Computation: Machine-Oriented Modeling," in: Foundations of Intelligent Systems, Z. Ras and S. Ohsuga (eds), Lecture Notes in Artificial Intelligence #1932, Springer-Verlag, 2000, pp. 486-494. (12th International Symposium on Methodologies for Intelligent Systems, Charlotte, NC, Oct 11-14, 2000)
  161. Z. Pawlak, Rough Sets: Theoretical Aspects of Reasoning about Data, Kluwer Academic Publishers, 1991.
  164. Q. Liu, S. H. Liu and F. Zheng, Rough Logic and Its Applications in Data Reduction, Journal of Software (in Chinese), Vol. 12, No. 3, 2001, 415-419.
  165. T. Y. Lin and Q. Liu, First-Order Rough Logic I: Approximate Reasoning via Rough Sets, Fundamenta Informaticae, Vol. 27, No. 2-3, Aug. 1996, 137-154.
  166. A. Skowron, Toward Intelligent Systems: Calculi of Information Granules, Proceedings of the International Workshop on Rough Set Theory and Granular Computing (RSTGC-2001), Bulletin of the International Rough Set Society, Vol. 5, No. 1/2, May 20-22, 2001, 9-30.
  167. A. Skowron, J. Stepaniuk and J. F. Peters, Extracting Patterns Using Information Granules, Proceedings of the International Workshop on Rough Set Theory and Granular Computing (RSTGC-2001), Bulletin of the International Rough Set Society, Vol. 5, No. 1/2, May 20-22, 2001, 135-142.
  168. Q. Liu, Neighborhood Logic and Its Data Reasoning in Information Table of Neighborhood Values, Chinese Journal of Computers, Vol. 24, No. 4, 2001, 405-410.
  169. Q. Liu, Rough Sets and Rough Reasoning (in Chinese), Science Press, Beijing, 2001.
  170. H. Almuallim and T. Dietterich, "Learning with Many Irrelevant Features", Proc. of the Ninth National Conference on Artificial Intelligence, pp. 547-552, 1991.
  171. H. Almuallim and T. Dietterich, "Learning Boolean Concepts in the Presence of Many Irrelevant Features", Artificial Intelligence 69, 1-2, pp. 279-305, 1994.
  172. A.L. Blum and P. Langley, "Selection of Relevant Features and Examples in Machine Learning", Artificial Intelligence, 97, pp. 245-271, 1997.
  173. C. Cardie, "Using Decision Trees to Improve Case-Based Learning", Proc. of the Tenth International Conference on Machine Learning, pp. 25-32, 1993.
  174. M. Dash and H. Liu, "Feature Selection Methods for Classifications", Intelligent Data Analysis: An International Journal, 1, 3, 1997. http://www-east.elsevier.com/ida/free.htm.
  175. M. Dash and H. Liu, "Feature Selection for Clustering", Proc. of the Fourth Pacific Asia Conference on Knowledge Discovery and Data Mining (PAKDD-2000), Springer Verlag, pp. 110-121, 2000.
  176. J. G. Dy and C. E. Brodley, "Feature Subset Selection and Order Identification for Unsupervised Learning", Proc. of the Seventeenth International Conference on Machine Learning, pp. 247-254, 2000.
  177. U. Fayyad, G.G. Grinstein, and A. Wierse, Information Visualization in Data Mining and Knowledge Discovery, Morgan Kaufmann Publishers, 2001.
  178. J. Gama and P. Brazdil, "Constructive Induction on Continuous Spaces", chapter 18, pp. 289-303. In [192], 1998. 2nd Printing, 2001.
  179. J. Gama, "Functional Trees", in Proc. of the Fourth International Conference on Discovery Science, pp. 58-73, 2001.
  180. J. Han and M. Kamber, Data Mining: Concepts and Techniques, Morgan Kaufmann, 2001.
  181. D. Hand, H. Mannila, and P. Smyth, Principles of Data Mining, A Bradford Book, The MIT Press, 2001.
  182. A. Jain and D. Zongker, "Feature selection: Evaluation, application, and small sample performance", IEEE Trans. on Pattern Analysis and Machine Intelligence 19, 2, pp. 153-158, 1997.
  183. Y. Kim, W. Street, and F. Menczer, "Feature Selection for Unsupervised Learning via Evolutionary Search", Proc. of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 365-369, 2000.
  184. K. Kira and L. Rendell, "The feature selection problem: Traditional methods and a new algorithm", Proc. of the Tenth National Conference on Artificial Intelligence, pp. 129-134, 1992.
  185. I. Kononenko, "Estimating attributes: Analysis and extension of RELIEF", Proceedings of the European Conference on Machine Learning, pp. 171-182, 1994.
  186. P. Langley, Elements of Machine Learning, Morgan Kaufmann, 1996.
  187. H. Liu and R. Setiono, "A Probabilistic Approach to Feature Selection - A Filter Solution", Proc. of the International Conference on Machine Learning (ICML-96), pp. 319-327, 1996.
  188. H. Liu and R. Setiono, "Feature Selection and Classification - A Probabilistic Wrapper Approach", Proc. of the Ninth International Conference on Industrial and Engineering Applications of AI and ES, pp. 419-424, 1996.
  189. H. Liu and R. Setiono, "Feature Transformation and Multivariate Decision Tree Induction", Proc. of the First International Conference on Discovery Science (DS'98), Springer Verlag, pp. 279-290, 1998.
  190. H. Liu and H. Motoda, Feature Selection for Knowledge Discovery and Data Mining, Boston: Kluwer Academic Publishers, 1998.
  191. H. Liu, H. Motoda and M. Dash, "A Monotonic Measure for Optimal Feature Selection", Proc. of the European Conference on Machine Learning, pp. 101-106, 1998.
  192. H. Liu and H. Motoda, editors, Feature Extraction, Construction and Selection: A Data Mining Perspective, Boston: Kluwer Academic Publishers, 1998. 2nd Printing, 2001.
  193. H. Liu and H. Motoda, editors, Instance Selection and Construction for Data Mining, Boston: Kluwer Academic Publishers, 2001.
  194. H. Liu, H. Lu, and J. Yao, "Toward Multidatabase Mining: Identifying Relevant Databases", IEEE Transactions on Knowledge and Data Engineering, 13, 4, pp. 541-553, 2001.
  195. C.J. Matheus, "The Need for Constructive Induction", Proc. of the Eighth International Workshop on Machine Learning, pp. 173-177, 1991.
  196. P. Narendra and K. Fukunaga, "A Branch and Bound Algorithm for Feature Subset Selection", IEEE Trans. on Computers, C-26, 9, pp. 917-922, 1977.
  197. M.J. Pazzani, "Constructive Induction of Cartesian Product Attributes", chapter 21, pp. 341-354. In [192], 1998. 2nd Printing, 2001.
  198. R. Setiono and H. Liu, "Feature Extraction via Neural Networks", chapter 12, pp. 191-204. In [192], 1998. 2nd Printing, 2001.
  199. W. Siedlecki and J. Sklansky, "On Automatic Feature Selection", International Journal of Pattern Recognition and Artificial Intelligence, 2, pp. 197-220, 1988.
  200. L. Talavera, "Feature Selection as a Preprocessing Step for Hierarchical Clustering", Proc. of the Sixteenth International Conference on Machine Learning, pp. 389-397, 1999.
  201. L. Talavera, "Feature Selection as Retrospective Pruning in Hierarchical Clustering", Proc. of the Third Symposium on Intelligent Data Analysis (IDA'99), pp. 75-86, 1999.
  202. H. Vafaie and K. De Jong, "Evolutionary Feature Space Transformation", pp. 307-323. In [192], 1998. 2nd Printing, 2001.
  203. J. Wnek and R.S. Michalski, "Hypothesis-Driven Constructive Induction in AQ17-HCI: A Method and Experiments", Machine Learning, 14, pp. 139-168, 1994.
  204. N. Wyse, R. Dubes, and A.K. Jain, "A critical evaluation of intrinsic dimensionality algorithms", In E.S. Gelsema and L.N. Kanal, editors, Pattern Recognition in Practice, pp. 415-425. Morgan Kaufmann Publishers, Inc., 1980.
  205. Z. Zheng, "A Comparison of Constructing Different Types of New Features for Decision Tree Learning", chapter 15, pp. 239-255. In [192], 1998. 2nd Printing, 2001.
  207. Agrawal, R., Imielinski, T., Swami, A. (1993): Mining Association Rules between Sets of Items in Large Databases. Proc. ACM SIGMOD Conf. on Management of Data, 207-216.
  208. Chellas, B.F. (1980): Modal Logic: An Introduction. Cambridge Univ. Press, Cambridge.
  209. Lin, T.Y. (1998): Granular Computing on Binary Relation I. In: L. Polkowski and A. Skowron (eds.), Rough Sets in Knowledge Discovery 1: Methodology and Applications, Physica-Verlag, pp. 107-121.
  210. Lin, T.Y. (1998): Granular Computing on Binary Relation II. In: L. Polkowski and A. Skowron (eds.), Rough Sets in Knowledge Discovery 2: Applications, Case Studies and Software Systems, Physica-Verlag, pp. 122-140.
  211. Murai, T., Miyakoshi, M., Shimbo, M. (1993): Measure-Based Semantics for Modal Logic. In: R. Lowen and M. Roubens (eds.), Fuzzy Logic: State of the Art, Kluwer, Dordrecht, 395-405.
  213. Murai, T., Miyakoshi, M., Shimbo, M. (1994): Soundness and Completeness Theorems Between the Dempster-Shafer Theory and Logic of Belief. Proc. 3rd FUZZ-IEEE (WCCI), 855-858.
  214. Murai, T., Miyakoshi, M., Shimbo, M. (1995): A Logical Foundation of Graded Modal Operators Defined by Fuzzy Measures. Proc. 4th FUZZ-IEEE/2nd IFES, 151-156.
  215. Murai, T., Nakata, M., Sato, Y. (2001): A Note on Conditional Logic and Association Rules. In: T. Terano et al. (eds.), New Frontiers in Artificial Intelligence, LNAI 2253, Springer, 390-394.
  217. Pawlak, Z. (1982): Rough Sets. Int. J. Computer and Information Sciences, 11, 341-356.
  218. Pawlak, Z. (1991): Rough Sets: Theoretical Aspects of Reasoning about Data. Kluwer, Dordrecht.
  219. R. Agrawal, H. Mannila, R. Srikant, H. Toivonen, and A. I. Verkamo: Fast Discovery of Association Rules, Advances in Knowledge Discovery and Data Mining, pp. 307-328, AAAI/MIT Press, Menlo Park, Calif. (1996).
  220. J. Dougherty, R. Kohavi, and M. Sahami: Supervised and Unsupervised Discretization of Continuous Features, Proc. Twelfth Int'l Conf. on Machine Learning (ICML), pp. 194-202 (1995).
  221. U. M. Fayyad and K. B. Irani: Multi-Interval Discretization of Continuous-Valued Attributes for Classification Learning, Proc. Thirteenth Int'l Joint Conf. on Artificial Intelligence (IJCAI), pp. 1022-1027 (1993).
  222. W. Feller: An Introduction to Probability Theory and Its Applications, Wiley, New York (1957).
  223. D. D. Jensen and P. R. Cohen: "Multiple Comparisons in Induction Algorithms", Machine Learning, Vol. 38, No. 3, pp. 309-338 (2000).
  224. M. J. Kearns and U. V. Vazirani: An Introduction to Computational Learning Theory. MIT Press, Cambridge, Mass. (1994).
  225. C. J. Merz and P. M. Murphy: UCI Repository of Machine Learning Databases, http://www.ics.uci.edu/~mlearn/MLRepository.html, Univ. of California, Dept. of Information and Computer Sci. (1994).
  226. J. R. Quinlan and R. Cameron-Jones: "Oversearching and Layered Search in Empirical Learning", Proc. Fourteenth Int'l Joint Conf. on Artificial Intelligence (IJCAI), pp. 1019-1024 (1995).
  227. S. Russell and P. Norvig: Artificial Intelligence, a Modern Approach, pp. 552-558, Prentice Hall, Upper Saddle River, N.J. (1995).
  228. C. Schaffer: "Overfitting Avoidance as Bias", Machine Learning, Vol. 10, No. 2, pp. 153-178 (1993).
  229. P. Smyth and R. M. Goodman: "An Information Theoretic Approach to Rule Induction from Databases", IEEE Trans. Knowledge and Data Eng., Vol. 4, No. 4, pp. 301-316 (1992).
  230. E. Suzuki: "Autonomous Discovery of Reliable Exception Rules", Proc. Third Int'l Conf. on Knowledge Discovery and Data Mining (KDD), pp. 259-262 (1997).
  231. E. Suzuki: "Simultaneous Reliability Evaluation of Generality and Accuracy for Rule Discovery in Databases", Proc. Fourth Int'l Conf. on Knowledge Discovery and Data Mining (KDD), pp. 339-343 (1998).
  232. E. Suzuki: "Worst-Case Analysis of Rule Discovery", Discovery Science, LNAI 2226 (DS), pp. 365-377, Springer (2001). (Erratum: http://www.slab.dnj.ynu.ac.jp/erratumds2001.pdf)
  233. Bergadano, F., Matwin, S., Michalski, R.S. and Zhang, J. Learning Two-Tiered Descriptions of Flexible Concepts: The POSEIDON System, Machine Learning, 8, 5-43, 1992.
  234. Breiman, L., Friedman, J., Olshen, R. and Stone, C. Classification And Regression Trees. Belmont, CA: Wadsworth International Group, 1984.
  235. Grzymala-Busse, J.W. LERS - A system for learning from examples based on rough sets, in: Slowinski, R. (ed) Intelligent Decision Support. Handbook of Applications and Advances of the Rough Set Theory, Kluwer Academic Publishers, Dordrecht, 1992, pp. 3-18.
  236. Michalski, R.S. A Theory and Methodology of Inductive Learning. In: Michalski, R.S., Carbonell, J.G. and Mitchell, T.M. (eds.), Machine Learning - An Artificial Intelligence Approach, 83-134, Morgan Kaufmann, CA, 1983.
  238. Michalski, R.S., et al. The Multi-Purpose Incremental Learning System AQ15 and its Testing Application to Three Medical Domains, Proc. of AAAI-86, 1041-1045, Morgan Kaufmann, CA, 1986.
  239. Pawlak, Z. Rough Sets, Kluwer Academic Publishers, Dordrecht, 1991.
  240. Quinlan, J.R. C4.5: Programs for Machine Learning. Morgan Kaufmann, Palo Alto, CA, 1993.
  241. Skowron, A. and Grzymala-Busse, J. From rough set theory to evidence theory. In: Yager, R., Fedrizzi, M. and Kacprzyk, J. (eds.), Advances in the Dempster-Shafer Theory of Evidence, pp. 193-236, John Wiley & Sons, New York, 1994.
  242. Skowron, A. and Rauszer, C. The Discernibility Matrix and Functions in Information Systems, in: Slowinski, R. (ed) Intelligent Decision Support. Handbook of Applications and Advances of the Rough Set Theory, Kluwer Academic Publishers, Dordrecht, 1992, pp. 331-362.
  243. Tsumoto, S. and Tanaka, H. A Common Framework of Empirical Learning Methods based on Rough Sets and Matroid Theory. Fundamenta Informaticae, pp. 273-288, 1996.
  244. Welsh, D.J.A. Matroid Theory, Academic Press, London, 1976.
  245. White, N.(ed.) Matroid Applications, Cambridge University Press, 1991.
  246. Whitney, H. On the abstract properties of linear dependence, Am. J. Math., 57, 509-533, 1935.
  247. Ziarko, W. The Discovery, Analysis, and Representation of Data Dependencies in Databases, in: Knowledge Discovery in Databases, Morgan Kaufmann, 1991.
  248. Ziarko, W. Variable Precision Rough Set Model, Journal of Computer and System Sciences, 46, 39-59, 1993.
  249. Cochran, W. G., Sampling Techniques, 3rd ed., John Wiley & Sons, New York, 1977.
  250. Curtiss, J. H., Lectures on the Theory of Industrial Sampling, New York University, Institute of Mathematical Sciences, 1955.
  251. Deming, W. E., Some Theory of Sampling, John Wiley & Sons, New York, 1950.
  252. Govindarajulu, Z., Elements of Sampling Theory & Methods, Prentice Hall, 1999.
  253. Jessen, R. J., Statistical Survey Techniques, John Wiley & Sons, New York, 1978.
  254. Neimark, E. D. & Estes, W. K., Stimulus Sampling Theory, Holden-Day, 1967.
  255. Pitard, F. F., Pierre Gy's Sampling Theory & Sampling Practice, Vol. 1, CRC Press, 1989.
  256. Raj, Des, Sampling Theory, McGraw-Hill, 1968.
  257. Sukhatme, P. V., Sampling Theory of Surveys with Applications, Iowa State Univ. Press, 1984; Tsypkin, Y. Z., Sampling Systems Theory and Its Applications, Macmillan, 1964.
  258. Wetherill, G. B., Sampling Inspection and Quality Control, 2nd ed., Chapman and Hall, 1977.
  259. Yamane, T., Elementary Sampling Theory, Prentice-Hall, 1967.
  260. Demri, S. and Orlowska, E. Logical analysis of indiscernibility, in: Incomplete Information: Rough Set Analysis, Orlowska, E. (Ed.), Physica-Verlag, Heidelberg, pp. 347-380, 1998.
  261. Fayyad, U.M. and Piatetsky-Shapiro, G. (Eds.) Advances in Knowledge Discovery and Data Mining, AAAI Press, 1996.
  262. Mitchell, T.M. Generalization as search, Artificial Intelligence, 18, 203-226, 1982.
  263. Pawlak, Z. Rough Sets: Theoretical Aspects of Reasoning about Data, Kluwer Academic Publishers, Dordrecht, 1991.
  264. Quinlan, J.R. Learning efficient classification procedures and their application to chess end-games, in: Machine Learning: An Artificial Intelligence Approach, Vol. 1, Michalski, R.S., Carbonell, J.G., and Mitchell, T.M. (Eds.), Morgan Kaufmann, Palo Alto, CA, pp. 463-482, 1983.
  265. Wille, R. Concept lattices and conceptual knowledge systems, Computers & Mathematics with Applications, 23, 493-515, 1992.
  266. Yao, Y.Y. Granular computing: basic issues and possible solutions, Proceedings of the 5th Joint Conference on Information Sciences, pp. 186-189, 2000.
  267. Yao, Y.Y. On modeling data mining with granular computing, Proceedings of COMPSAC 2001, pp. 638-643, 2001.
  268. Yao, Y.Y. and Zhong, N. Potential applications of granular computing in knowledge discovery and data mining, Proceedings of World Multiconference on Systemics, Cybernetics and Informatics, pp. 573-580, 1999.
  269. Zadeh, L.A. Towards a theory of fuzzy information granulation and its centrality in human reasoning and fuzzy logic, Fuzzy Sets and Systems, 90, 111-127, 1997.