A Divination Model for Customer Product Purchase Status by Efficient Machine Learning Approaches

Authors

  • Prof. Sapna Jain Choudhary, Shri Ram Group of Institutions, Jabalpur, Madhya Pradesh, India
  • Priyanka Tiwari, Shri Ram Group of Institutions, Jabalpur, Madhya Pradesh, India

Keywords:

ML, Python, Naïve Bayes, Logistic Regression, Random Forest, Decision Tree

Abstract

Machine Learning (ML) software, which implements an ML algorithm, is widely used in many application domains, including finance, commerce, and engineering. Faults in ML software can cause widespread losses in these domains, so it is important to test ML software effectively in order to detect and remove such faults. Testing ML software is difficult, however, particularly when it comes to producing test oracles for checking behavioural correctness (such as expected properties or expected outputs). To address this test-oracle problem, this paper presents a novel multiple-implementation approach to testing supervised learning software. The insight underlying the approach is that multiple independently written implementations of a supervised learning algorithm often exist, and a majority of them will usually produce the expected output for a test input (even though none of these implementations is guaranteed to be fault-free). Specifically, the proposed approach obtains a reference output for a test input to a supervised learning algorithm (here, Logistic Regression) by running the test input on n implementations of the algorithm and then using the common test output produced by a majority (determined via a percentage threshold) of those n implementations.
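The majority-vote oracle described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code: `majority_oracle` and the string labels are hypothetical names, and the 50% agreement threshold stands in for the paper's unspecified percentage threshold.

```python
from collections import Counter

def majority_oracle(outputs, threshold=0.5):
    """Return the test output produced by a majority of the n
    implementations, or None when no output clears the agreement
    threshold (the oracle abstains rather than guess)."""
    winner, count = Counter(outputs).most_common(1)[0]
    return winner if count / len(outputs) > threshold else None

# Three hypothetical, independently written implementations are run on
# the same test input; two of three agree, so their shared label is
# taken as the expected output, even if no implementation is fault-free.
print(majority_oracle(["buy", "buy", "no-buy"]))   # majority label
print(majority_oracle(["buy", "no-buy", "maybe"])) # no majority: None
```

An implementation whose output disagrees with the majority label on some test input is then flagged as potentially faulty on that input.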
The proposed approach includes strategies to address the challenges of multiple-implementation testing for supervised learning, such as defining the dataset used in supervised learning and coping with sensitivity to inconsistent algorithm configurations across implementations. In addition, to improve the dependability of supervised learning during in-field use while incurring low runtime overhead, the approach includes efficient implementation strategies. Evaluations show that the proposed approach is effective at detecting real faults in real-world ML software, including Naïve Bayes implementations and multiple linear regression implementations, and that it substantially reduces loss while retaining efficient implementations with high prediction accuracy. The approach is also compared against Random Forest and Decision Tree classifiers using the Flipkart dataset.
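The classifier comparison described above can be sketched with scikit-learn, evaluating the four algorithms named in the abstract on a common train/test split. This is an illustrative sketch only: a synthetic dataset from `make_classification` stands in for the Flipkart purchase data, which is not reproduced here, so the scores do not correspond to the paper's reported results.

```python
# Compare Naive Bayes, Logistic Regression, Random Forest and Decision
# Tree on the same split; a synthetic dataset stands in for the
# Flipkart customer-purchase data used in the paper.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)

models = {
    "Naive Bayes": GaussianNB(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Random Forest": RandomForestClassifier(random_state=0),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
}
# Fit each model and record its held-out accuracy.
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te)
          for name, m in models.items()}
for name, acc in scores.items():
    print(f"{name}: {acc:.3f}")
```

On the real purchase data, the same loop would be run after encoding the categorical Flipkart features numerically.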

References

  1. Mitchell, T. (1997). Machine Learning. McGraw Hill. p. 2. ISBN 0-07-042807-7.
  2. Harrington, P. (2012). Machine Learning in Action. Manning Publications Co. ISBN 9781617290183.
  3. Fisher, R. A. (1936). The Use of Multiple Measurements in Taxonomic Problems. Annals of Eugenics, 7(2):179-188.
  4. Jain, A. K., Duin, R. P. W., Mao, J. (2000). Statistical pattern recognition: a review. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(1):4-37.
  5. Cover, T. M., Hart, P. E. (1967). Nearest neighbor pattern classification. IEEE Transactions on Information Theory, 13(1):21-27.
  6. Mirkes, E. KNN and Potential Energy (Applet). University of Leicester. Available: http://www.math.le.ac.uk/people/ag153/homepage/KNN/KNN3.html
  7. Kozma, L. k Nearest Neighbors Algorithm. Helsinki University of Technology. Available: http://www.lkozma.net/knn2.pdf
  8. Moret, B. M. E. (1982). Decision trees and diagrams. ACM Computing Surveys, 14(4):593-623.
  9. Quinlan, J. R. (1986). Induction of decision trees. Machine Learning, 1:81-106.
  10. Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27:379-423.
  11. Sahami, M., Dumais, S., Heckerman, D., Horvitz, E. (1998). A Bayesian approach to filtering junk e-mail. Learning for Text Categorization: Papers from the 1998 Workshop, Madison, Wisconsin. AAAI Technical Report WS-98-05.
  12. Langley, P., Iba, W., Thompson, K. (1992). An analysis of Bayesian classifiers. Proceedings of the Tenth National Conference on Artificial Intelligence, San Jose, CA.
  13. Kamruzzaman, S. M. (2006). Text Classification using Artificial Intelligence. Journal of Electrical Engineering, 33(I & II), December.
  14. Friedman, N., Geiger, D., Goldszmidt, M. (1997). Bayesian network classifiers. Machine Learning, 29:131-163.
  15. Rish, I. (2001). An empirical study of the naive Bayes classifier. IJCAI 2001 Workshop on Empirical Methods in Artificial Intelligence, 22:41-46.

Published

2020-06-30

Section

Research Articles

How to Cite

[1]
Prof. Sapna Jain Choudhary and Priyanka Tiwari, "A Divination Model for Customer Product Purchase Status by Efficient Machine Learning Approaches", International Journal of Scientific Research in Science, Engineering and Technology (IJSRSET), Print ISSN: 2395-1990, Online ISSN: 2394-4099, Volume 7, Issue 3, pp. 479-484, May-June 2020.