Kulvinder Kochar Kaur*1, Gautam Allahbadia2 and Mandeep Singh3
1,2Centre for Human Reproduction, India
3Department of Neurology, India
*Corresponding author: Kulvinder Kochar Kaur, Centre for Human Reproduction, India
Submission: June 18, 2021; Published: July 26, 2021
ISSN: 2640-9666; Volume 4, Issue 4
Classically, new therapies have been developed for the population at large. Large-scale genome sequencing has demonstrated marked genetic variation among individuals. For diseases driven by genetic processes, such as cancer, genomic sequencing can help discover the full set of mutations that drive individual tumors. The capacity to record the genetic makeup of individual patients has led to the concept of Precision Medicine (PM), a modern, technology-driven form of individualized medicine. Precision medicine tries to tailor the most efficacious treatment for each patient in a way that suits his or her genetic individuality. To make medicine more individualized, PM increasingly seeks to incorporate and integrate results beyond genomics, such as epigenomics, metabolomics, and imaging. Increasingly, the effective use and integration of these methods in PM requires Artificial Intelligence (AI) and machine learning. This modern aspect of PM, which was adopted early in some fields of medicine such as oncology, has begun to attract growing interest in reproductive medicine. We review the basic concepts and history of PM and AI, emphasize their escalating influence on reproductive medicine, and discuss some of the restrictions and problems faced by these new modalities in medicine.
Keywords: Precision medicine; Artificial intelligence; Machine learning; Reproductive medicine
Abbreviations: PM: Precision Medicine; AI: Artificial Intelligence; SVM: Support Vector Machine; RF: Random Forest; CNN: Convolutional Neural Network; AUC: Area Under the Curve; PGT-A: Preimplantation Genetic Testing for Aneuploidy; TE: Trophectoderm; ICSI: Intracytoplasmic Sperm Injection; EHR: Electronic Health Record
Conventionally, new treatments have been developed for the population at large. Over the last few decades, advances such as the capacity to genotype or sequence the full genome have documented significant genetic variability among individuals. Technologies like these can disclose mutations that patients with inherited conditions receive directly from their parents. For diseases driven by genetic processes, such as cancer, genomic sequencing can expose all the mutations that lie behind the induction of each tumor [2,3]. This increased capacity to expose the genetic makeup of individual patients has led to the emergence of Precision Medicine (PM): medical care planned to maximize the effectiveness or therapeutic advantage for specific groups of patients, specifically through genetic or molecular profiling. PM recognizes that each person's genetic makeup, and the molecular makeup of his or her disease, is unique, and then tries to use this knowledge to match every person to the best possible therapy. Such matching is increasingly feasible because many treatments are now available, together with greater insight into the molecular changes that correlate with the most probable response to particular treatments. For example, cancer patients harboring mutations affecting the NTRK kinase respond well to NTRK inhibitors. Besides greater efficacy, such targeted therapies usually show less toxicity in other cells and tissues because those cells do not possess the target.
The capacity to interrogate circulating DNA, rather than biopsy material, may mean that genomic evaluation will be within the reach of more patients in the coming years. PM research tries to improve our capacity to understand and predict the effectiveness of therapy using knowledge derived from the uniqueness of every person. Increasingly, PM goes beyond genomics and has applications in epigenetics, proteomics, and metabolomics as well. Additionally, quantification of environmental exposures, behaviors, and the immune system can enhance PM. Within genomics too, the capacity to focus on individual cells (single-cell genomics) has exposed heterogeneity that had not previously been observed. These data validate findings seen in patients, in whom targeted treatments can be efficacious for a short time before resistance develops, as a few cells in a large population may already harbor resistance mutations.
PM is being advanced more and more by Artificial Intelligence (AI). Machine learning is a kind of AI in which a machine can learn and adapt through conditions and training. Classically, a training data set is used to teach a computer program to link objects, such as images described by a series of parameters (colour, shape, texture), with particular labels or classes, such as cancerous or noncancerous. Once trained, the computer program, known as a classifier, is used to label new objects. Machine learning involves two strategies: i) supervised and ii) unsupervised learning. In supervised learning, the labels or classes are known at the time of training. In unsupervised learning, by contrast, an algorithm such as hierarchical clustering is used to discover structure within the data, such as the presence of classes.
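The distinction can be sketched with a toy example (all data here are hypothetical, and the methods are deliberately minimal stand-ins): a supervised nearest-centroid classifier that learns from labeled points, versus an unsupervised two-cluster routine that receives the same points without any labels.

```python
# Illustrative sketch (hypothetical toy data): supervised learning uses
# labels during training; unsupervised learning discovers structure itself.

def nearest_centroid_train(points, labels):
    """Supervised: compute one centroid per labeled class."""
    centroids = {}
    for lbl in set(labels):
        members = [p for p, l in zip(points, labels) if l == lbl]
        centroids[lbl] = sum(members) / len(members)
    return centroids

def nearest_centroid_predict(centroids, x):
    return min(centroids, key=lambda lbl: abs(x - centroids[lbl]))

def two_means(points, iters=10):
    """Unsupervised: split unlabeled points into two clusters."""
    c0, c1 = min(points), max(points)  # simple initialization
    for _ in range(iters):
        g0 = [p for p in points if abs(p - c0) <= abs(p - c1)]
        g1 = [p for p in points if abs(p - c0) > abs(p - c1)]
        c0, c1 = sum(g0) / len(g0), sum(g1) / len(g1)
    return sorted([c0, c1])

# Supervised: the training data come with class labels.
cents = nearest_centroid_train([1.0, 1.2, 5.0, 5.3],
                               ["benign", "benign", "cancer", "cancer"])
print(nearest_centroid_predict(cents, 4.8))  # -> cancer

# Unsupervised: the same points, no labels; two clusters emerge from the data.
print(two_means([1.0, 1.2, 5.0, 5.3]))       # clusters near 1.1 and 5.15
```

Either way, the structure found (class centroids or clusters) is then used to assign new objects to classes.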
Various machine learning methods exist that can be used to learn how to map objects into classes and to produce predictive models. Among the most used are logistic regression, random forests (RFs), Naïve Bayes classifiers, Support Vector Machines (SVMs), and artificial neural networks. Logistic regression is commonly used as a supervised statistical method for predictive evaluation. It can model the association between a binary variable (such as cancer or noncancer) and one or more independent variables (such as genetics or cigarette smoking). Logistic regression uses a sigmoid function to map the weighted sum of the parameters describing an object to the probability that the object belongs to a specific class. The weights are learned from the data (i.e., from many labelled objects) at the time of training, using the statistical method of maximum likelihood estimation. Logistic regression is a commonly used machine learning method because: i) it is easy to implement, with few hyperparameters to tune; ii) its results are interpretable and support statistical evaluation, such as P values; iii) it can be extended to model interactions among the variables; and iv) it usually shows good accuracy when the mapping between classes and training data is not too complicated.
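The sigmoid mapping can be sketched directly. The weights and features below are invented for illustration; in practice they would be fitted by maximum likelihood from labeled training data.

```python
# Minimal sketch (hypothetical weights): logistic regression maps the
# weighted sum of an object's parameters through a sigmoid to a class
# probability. Real weights are fitted by maximum likelihood estimation.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict_proba(features, weights, bias):
    """Probability that the object belongs to the positive class."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return sigmoid(z)

# Hypothetical model: two risk factors (pack-years smoked, risk-allele count).
weights, bias = [0.08, 0.9], -3.0
p = predict_proba([20.0, 2.0], weights, bias)  # z = -3 + 1.6 + 1.8 = 0.4
print(round(p, 3))                             # -> 0.599
```

A probability above a chosen threshold (commonly 0.5) would then be labeled as the positive class.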
Random Forests (RFs) map objects to classes by generating multiple decision trees from randomly selected subsets of the training data. Each decision tree learns to map objects to classes using simple rules. Each decision tree casts a vote for a class on every object, and the sum of the votes is used to estimate the probability that an object belongs to a specific class; for this reason, RFs are usually regarded as an ensemble method. The use of decision trees lets RFs capture complicated interactions among variables, which commonly yields robust classification performance. Conveniently, RFs can handle discrete and continuous parameters together within the same model, a situation common in the biomedical data sciences. Naïve Bayes classifiers, by contrast, use Bayes' rule to estimate the probability that an object belongs to a specific class given the data about that object. A crucial property of Naïve Bayes classifiers (the reason they are called naïve) is that every parameter is assumed to be an independent predictor of the class.
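The voting step of a random forest can be sketched with made-up "stumps" standing in for learned trees; a real forest would grow each tree on a random subset of the data and features.

```python
# Illustrative sketch (hypothetical stumps): a random forest lets each tree
# vote on an object's class; the vote fraction approximates the class
# probability. Real forests grow each tree on random data/feature subsets.
from collections import Counter

def tree_vote(thresholds, features):
    """One decision stump per feature set: votes positive if any feature
    exceeds its threshold (a stand-in for a fully learned tree)."""
    return "cancer" if any(x > t for x, t in zip(features, thresholds)) else "benign"

def forest_predict(forest, features):
    votes = Counter(tree_vote(stump, features) for stump in forest)
    label, count = votes.most_common(1)[0]
    return label, count / len(forest)  # majority class and its vote fraction

# Three hypothetical stumps, each with its own per-feature thresholds.
forest = [(0.5, 3.0), (0.7, 2.0), (0.9, 4.0)]
print(forest_predict(forest, (0.8, 2.5)))  # two of three stumps vote "cancer"
```

The vote fraction (here 2/3) is what gets reported as the class probability.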
A weight for each parameter is estimated from the labeled data as part of training. Naïve Bayes classifiers are popular because of their intuitive, easily interpreted, and portable structure, which remains firmly grounded in statistics. Because they use Bayes' theorem in their decision rule, they are very efficient and markedly flexible; they are particularly useful with large data sets. Support Vector Machines (SVMs) learn to classify objects by finding the hyperplanes that best separate the objects of the classes under evaluation; by nonlinearly transforming the input data into spaces of high dimension, SVMs can learn to predict complicated classes of objects. While SVMs are effective when the number of dimensions is greater than the number of samples, they can be slow and less applicable to larger-scale data. Artificial neural networks are machine learning methods that simulate, in various ways, how the brain works.
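The Naïve Bayes decision rule can be sketched with invented probabilities (all counts below are hypothetical): the class prior is multiplied by each feature's likelihood, treating the features as independent, and the scores are normalized into posteriors.

```python
# Minimal sketch (made-up probabilities): a Naïve Bayes classifier multiplies
# the class prior by the likelihood of each observed feature value, treating
# every feature as an independent predictor of the class.

def naive_bayes(priors, likelihoods, observed):
    """Return normalized class probabilities for the observed features."""
    scores = {}
    for cls in priors:
        p = priors[cls]
        for feat, value in observed.items():
            p *= likelihoods[cls][feat][value]  # the 'naive' independence step
        scores[cls] = p
    total = sum(scores.values())
    return {cls: p / total for cls, p in scores.items()}

# Hypothetical probabilities, as if estimated from labeled training data.
priors = {"cancer": 0.3, "benign": 0.7}
likelihoods = {
    "cancer": {"smoker": {True: 0.8, False: 0.2}, "variant": {True: 0.6, False: 0.4}},
    "benign": {"smoker": {True: 0.3, False: 0.7}, "variant": {True: 0.1, False: 0.9}},
}
posterior = naive_bayes(priors, likelihoods, {"smoker": True, "variant": True})
print(round(posterior["cancer"], 3))
```

Here the rarer class still wins the posterior because both observed features are far more likely under it.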
A neural network classically consists of several layers of artificial neurons that are fully connected to each other via edges, each edge associated with a weight. Every neuron receives signals from many neurons in the prior layer, sums those signals, and fires if the integrated signal rises above a particular threshold.
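The fire-above-threshold behavior can be sketched with a few invented weights (real networks use smooth activations and learn the weights by training):

```python
# Minimal sketch (hypothetical weights): a single artificial neuron sums its
# weighted inputs and fires (outputs 1) only if the sum crosses a threshold.

def neuron(inputs, weights, threshold):
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > threshold else 0

def layer(inputs, weight_rows, threshold):
    """One fully connected layer: every neuron sees every input."""
    return [neuron(inputs, row, threshold) for row in weight_rows]

# Two inputs feeding a hidden layer of three neurons.
hidden = layer([1.0, 0.5], [[0.9, 0.2], [0.1, 0.1], [0.5, 0.8]], threshold=0.6)
print(hidden)  # -> [1, 0, 1]
```

Stacking such layers, with the output of one feeding the next, gives the multilayer structure described below.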
Artificial neural networks are mostly made up of three layers:
I. Input,
II. Hidden, and
III. Output.
Deep neural networks extend neural networks by increasing the number of layers and the number of neurons per layer. More layers and more neurons mean more complex models, so deep neural networks can be trained to classify complicated objects such as images or videos. This increased complexity comes at the cost of the greater computation and computing power required for training. A commonly used kind of deep neural network is the Convolutional Neural Network (CNN), in which groups of neighboring neurons in one layer feed distinct neurons in the subsequent layer, thereby extracting different parameters and capturing the hierarchical patterns seen in images. CNNs are very robust for image classification and object detection; hence they are broadly used in biomedical imaging domains.
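The "neighboring neurons feed one neuron in the next layer" idea is the convolution operation, which can be sketched on a toy image (the image and filter values are invented for illustration):

```python
# Illustrative sketch (toy image): the core CNN operation is convolution --
# a small filter slides over the image, and each patch of neighboring pixels
# jointly activates one neuron in the next layer, capturing local patterns.

def convolve2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# A vertical-edge filter applied to a 4x4 image with a bright right half:
# the response is strong exactly where brightness jumps left-to-right.
image = [[0, 0, 9, 9] for _ in range(4)]
kernel = [[-1, 1], [-1, 1]]
print(convolve2d(image, kernel))  # -> [[0, 18, 0], [0, 18, 0], [0, 18, 0]]
```

Stacking many such filters, with pooling and further layers, is what lets CNNs build up from edges to textures to whole-object patterns.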
Current advances in computing power and hardware design, specifically the availability of graphics processing units (GPUs), allow these deep neural networks to be implemented efficiently. Various deep learning frameworks and interfaces, such as TensorFlow, Keras, and PyTorch, support deep learning applications. TensorFlow, developed by Google, is an open-source platform that helps developers build machine learning models and neural networks. Keras is a high-level application programming interface on top of TensorFlow that concentrates on modern deep neural networks. PyTorch is a separate open-source library developed by Facebook's AI Research laboratory; with its Python bindings, it is also well liked for developing deep learning software.
Machine learning classifiers are evaluated with several methods. One is the receiver operating characteristic (ROC) curve, which plots sensitivity (the true positive rate) against the false positive rate (1 − specificity) at different classifier output thresholds. ROC curves give a visual, graphical method for evaluating a model's performance and for reasoning about the trade-off between sensitivity and specificity. The area under the ROC curve is commonly used as an overall estimator of classifier performance. This Area Under the Curve (AUC) is an intuitive way to evaluate a classifier: it summarizes the output irrespective of which classification threshold is used, and the greater the AUC, the better the classifier predicts. Most classifiers output the probability of belonging to a specific class; at a particular probability threshold, the sensitivity and specificity can also be documented. The accuracy, which by definition is the number of true predictions divided by the total number of predictions, is also commonly used; in binary classification, the accuracy is the fraction of true positives plus true negatives. Some of these metrics can be biased, and hence misleading, when the classes are imbalanced, meaning there are far more positives than negatives or vice versa. Classifier performance must be determined using data that were not used for training. Commonly, a portion of the data, say 10-50%, is withheld while the rest is used for training; the withheld data are then used for evaluation. In cross-validation, the withheld set rotates through the data set, and multiple models are trained and evaluated.
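The AUC has a useful equivalent definition: the probability that a randomly chosen positive receives a higher score than a randomly chosen negative. That makes it computable by pairwise comparison, sketched here on a hypothetical held-out set:

```python
# Minimal sketch (made-up scores): the AUC equals the probability that a
# randomly chosen positive scores higher than a randomly chosen negative,
# so it can be computed by comparing every positive/negative pair.

def auc(scores, labels):
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)  # ties count half
    return wins / (len(pos) * len(neg))

def accuracy(scores, labels, threshold=0.5):
    preds = [1 if s >= threshold else 0 for s in scores]
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

# Hypothetical held-out set: classifier scores plus the true labels.
scores = [0.9, 0.8, 0.35, 0.6, 0.2, 0.1]
labels = [1,   1,   1,    0,   0,   0  ]
print(auc(scores, labels))       # -> 0.888... (8 of 9 pairs ranked correctly)
print(accuracy(scores, labels))  # -> 0.666... at a 0.5 threshold
```

Note how the AUC needs no threshold at all, whereas the accuracy changes if the 0.5 cutoff is moved.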
A related problem is that using subsets of the same data set for both testing and training means the same biases are present in every subset. Hence, one or more totally independent replication data sets are usually required to evaluate objective performance "in the real world". Once shown to be accurate and predictive, machine learning models can also be used to evaluate the importance of parameters, that is, how much individual parameters contribute to the accuracy of the classifier.
The recent generation of deep neural networks has stimulated significant AI and data science work in medicine. For example, deep neural networks can be used to find genetic variants in large-scale genomics data sets and to decipher the functional influence of germline as well as somatic genetic variants. Deep learning and other machine learning strategies may soon be used to predict responses to treatment based on genomics parameters. Deep neural networks have also pushed PM beyond genomics, particularly by analyzing medical images. A common and robust strategy when using deep neural networks for image evaluation is a method known as transfer learning, in which part of the network is trained beforehand on a very large image data set such as ImageNet, with more than 14 million images. At present, ImageNet is considered a standard data set for pretrained networks. This pretraining allows the model to be adapted and fine-tuned on medical data sets that are relatively small, with perhaps just 100 images per class (in contrast to the much greater data needed for training from scratch). Images and videos are where AI has brought the greatest transformation in medicine. For example, applying AI to skin lesion images can predict whether a lesion is cancerous, and applying AI to retinal scans can detect diabetic retinopathy and other retinal diseases with great precision. Applied to pathology data (tissue images), AI can differentiate cancer subtypes, predict whether tumors possess particular genetic changes, diagnose diseases from radiology images [24,25], and detect polyps in colonoscopy videos. Applying AI to non-imaging data also shows benefit.
For example, deep learning has been applied to medical record data from thousands of patients across various centres and was shown to predict the risk of readmission within 60 days, among other metrics, reliably.
Certain advances in genomics have already led the wider field of reproduction and fertility treatment to adopt the philosophy of PM. For example, genetic carrier screening can help prospective parents choose the type of reproductive medicine they want, such as whether preimplantation genetic testing (PGT) should be used to select embryos that do not possess particular mutations. In older patients undergoing IVF, Preimplantation Genetic Testing for Aneuploidy (PGT-A) can aid in selecting the embryos with the maximum probability of resulting in a successful pregnancy. PGT may become less invasive or even noninvasive if DNA from embryo culture media can be sequenced and shown to mirror DNA from embryo cells [28-30]. Genetic testing is further used after implantation to check for trisomies and other genetic aberrations. Similarly, noninvasive prenatal testing (liquid biopsy) of pregnant women can pick up fetal DNA in the blood in an attempt to diagnose genetic aberrations, such as aberrant chromosome numbers and genetic variants correlated with Mendelian abnormalities. Paradoxically, genomic methods such as these also detect previously unknown complications; for example, single-cell DNA sequencing has demonstrated that 80% of embryos possess mosaicism.
AI has also started to influence reproductive medicine, enabling more individualization. For example, deep learning can predict blastocyst quality from static or time-lapse embryo images with high precision in individual patients [36-40]. A CNN can further be trained to detect particular regions of the embryo, such as the inner cell mass (ICM) and trophectoderm (TE), which can be fed into an algorithm that evaluates embryo quality [41,42]. Letterie and McDonald, in 2020, attempted to design a computer algorithm for the decisions taken on different days during IVF. The algorithm was trained on 59,706 data points derived from 2,603 cycles, and its performance was evaluated on 556 cycles to which the algorithm was totally naive. The input variables were estradiol (E2) levels, follicle diameters as estimated by ultrasonography, day of the cycle, and the doses of recombinant FSH used. The accuracy of the algorithm was contrasted with the decisions taken by the reproductive physicians managing the cases.
Normally, during routine IVF, ultrasonography and laboratory reports are assessed, followed by decisions such as:
1. Stop controlled ovarian stimulation (COS) or continue COS;
2. If COS is stopped, either cancel the cycle because of a very poor response or trigger ovulation; if the decision is to persist with COS, treatment continues;
3. When the patient is required to return for monitoring;
4. What adjustment the hormone dosage requires.
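A rule-based sketch of this decision flow may clarify the structure. This is emphatically NOT the Letterie and McDonald algorithm: every threshold below is invented for illustration, and their model was learned from data rather than hand-written.

```python
# Hypothetical sketch only -- not the published algorithm. It shows how the
# same inputs (E2 level, lead follicle diameter, cycle day) could feed a
# stop/continue and trigger/cancel decision; all thresholds are invented.

def cos_decision(e2_pg_ml, lead_follicle_mm, cycle_day):
    if lead_follicle_mm >= 18 and e2_pg_ml >= 1500:
        return ("stop", "trigger ovulation")        # mature response reached
    if cycle_day > 14 and e2_pg_ml < 300:
        return ("stop", "cancel cycle: poor response")
    return ("continue", "adjust dose and re-monitor")

print(cos_decision(e2_pg_ml=1800, lead_follicle_mm=19, cycle_day=11))
print(cos_decision(e2_pg_ml=150,  lead_follicle_mm=9,  cycle_day=16))
```

A learned decision-tree model, as used in the study, effectively discovers such branch points and thresholds from thousands of labeled cycles instead of having them specified by hand.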
The accuracy of the algorithm for these four decisions was 0.92 for stopping or continuing treatment, 0.96 for triggering ovulation versus cancelling the cycle, 0.87 for the number of days until follow-up, and 0.82 for readjustment of the recombinant FSH dose. As demonstrated, the AI was markedly accurate for the first two decisions relative to the reproductive endocrinologists; nevertheless, it did not perform as well in determining the number of days until follow-up and the readjustment of the recombinant FSH dose. This was the flaw noted by Letterie and McDonald in 2020.
This shows that an AI approach is useful for day-to-day decision making in IVF, at minimum as an adjunct tool that warrants further exploration. As per Babayev's interpretation, this study used a decision-tree machine learning algorithm, which is easy to interpret; the use in ART of neural networks and deep learning, which can adapt in real time to new situations and handle missing values, remains relatively limited. Ultimately, the success of AI in selecting gametes and predicting IVF success depends on high-quality training data. Increased collaboration among reproductive clinicians, bioinformaticians, computational scientists, and biostatisticians who can harness the power of big data would enable wider application of AI in ART. As has been pointed out, optimizing embryo selection may reduce the chance of multiple pregnancies and their associated risks. Deep learning can further be used to evaluate sperm quality and thus help optimize Intracytoplasmic Sperm Injection (ICSI) [45,46].
Sperm selection methods using AI that are associated with fertilization capacity and successful IVF are limited. For example, a computer-assisted sperm analysis support vector machine model has been used to classify human sperm into five kinetic classes. Prior AI-based methods evaluated capacitation as a pointer of fertilization capacity, including Cap-Score (Androvia, Mountainside, NJ) and penetration assays, which ultimately yield sperm that are unfit for treatment because they are destroyed or wasted. Recently, it was demonstrated that human sperm intracellular pH, estimated by flow cytometry in normospermic men, is correlated with conventional IVF success in a single-institution series [47-50]. There are many other applications of AI in obstetrics and gynaecology research and clinical work, such as better fetal heart rate monitoring during pregnancy and predicting and detecting preterm labour along with other pregnancy complications. It has been predicted that revolutionary new applications of AI, such as personalization of hormone therapy and automated evaluation of the uterine lining, among many others, will keep making reproductive medicine more reliable and personalized, improving outcomes while minimizing complications.
Various difficulties exist in attempting to use AI in medicine. Most of these relate to Precision Medicine and Artificial Intelligence being two fields that have only recently matured. For example, PM and AI both face difficulties of standardization. In genetic evaluation, outcomes can be derived from a wide range of methods, from microarrays to targeted sequencing panels to whole-genome sequencing. Few comparisons among these platforms exist; however, those that have been performed illustrate a certain amount of variation in outcomes. The reasons offered are differences in sequencing procedures among platforms and differing variant-calling and filtering thresholds. Similarly, in the AI field there is a variety of methods and software libraries for training AI models, with few publicly accessible data sets large enough to robustly validate AI methods.
A commonly described limitation applicable to PM driven by AI and genomics is the existence of biases within the data used to acquire new medical knowledge or to train predictive models. As a simple example: if a machine learning model is trained using two classes of image data, say good and poor images, and the number of poor images is far greater than the number of good ones, an AI that predicts every image to be poor will appear to perform well. In a less simple example, data are accumulated from a particular demographic or from a specific kind of imaging device. Classifiers trained on such data carry its biases forward and yield biased predictions when used prospectively. Genomics data, which are the key to PM, have likewise been shown to possess marked biases. The most visible is the restricted ethnic representation in the cohorts from which these data sets arrive. The result of these biases is a PM that, besides working better in certain populations than in others, may also misdiagnose in underrepresented populations, potentially causing harm and accelerating health disparities.
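The class-imbalance pitfall described above can be put in numbers with a synthetic example (the 95:5 split is invented for illustration):

```python
# Minimal sketch (synthetic labels): with a 95:5 class imbalance, a model
# that always predicts the majority class scores 95% accuracy while being
# clinically useless -- it never identifies a single minority-class case.

labels = [0] * 95 + [1] * 5  # 95 "poor" images, 5 "good" ones

def always_majority(_):
    return 0                 # ignores the input entirely

preds = [always_majority(x) for x in labels]
acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
sensitivity = sum(p == 1 and y == 1 for p, y in zip(preds, labels)) / 5

print(acc)          # -> 0.95: looks excellent
print(sensitivity)  # -> 0.0: misses every minority-class case
```

This is why imbalance-robust metrics such as the AUC, sensitivity, and specificity should accompany raw accuracy whenever the classes are unequal.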
It is key that researchers wanting to acquire knowledge from large clinical data sets, and to apply that insight to individual patients, know how to look for such biases. They can either try to limit them, for example by pooling data from separate centres and sources, or qualify the knowledge gained, recognizing that the AI model they generated may not be applicable to data sets whose biases differ from those of the training data sets. Whether PM is driven by AI, genomics, or both, replication across cohorts and prospective validation are likewise key when probing for such biases, which show up when performance in replication cohorts is lower than in the training cohorts. In a related matter, the size and representativeness of the training data sets are a limitation in fields where the data come from patients. Certain patient-derived data sets are publicly accessible, such as the Human Sperm Head Morphology dataset (HuSHeM); however, they tend to be small. Privacy laws such as HIPAA, the complicated nature of Electronic Health Record (EHR) and Information Technology (IT) systems, and competition among medical centres make data sharing across centres difficult, hampering the generation of data sets large and varied enough for AI training. In some branches, such as cancer or radiology, some of these restrictions have been overcome, and huge public data sets have been developed. Novel kinds of machine learning, such as collaborative (federated) learning, may one day help tackle some of these data-sharing complications.
Implementation is another tough problem. AI software must be incorporated into existing clinically validated workflows, which then require revalidation, and this represents a barrier to deployment. Since most AI depends on pattern matching and offers no explanation, it does not cover borderline patients, nor the explanations that medically trained humans can provide. One strategy for combining AI with human judgment is human-machine collaboration, which is being extensively evaluated [61,62]. For diagnosing skin cancer, an AI-based decision aid was demonstrated to enhance diagnostic accuracy over AI alone or the clinician alone; furthermore, physicians with less experience seemed to benefit more from the AI-based aid than those with much more experience. Similarly, the diagnosis of pneumonia on chest radiographs improved when radiologists and deep learning algorithms worked in combination, compared with radiologists or deep learning alone. This kind of partnership between humans and AI may become more common in the future: AI may be capable of making largely automated decisions for most subjects, while certain patients may require complicated reasoning that cannot be acquired by AI at present.
PM, a type of medicine that tries to personalize therapy beyond what is practiced at present, has been provoking the interest of many researchers. AI and data science, two fields that have only recently matured, will increasingly play a key part in broadening PM research, though these advances will have to face many hurdles. Here, we have tried to provide an overview of both precision medicine and artificial intelligence. Specifically, we have tried to convey the basic concepts, along with the hurdles and restrictions of artificial intelligence, covering most of the ways in which AI has been applied so far. Artificial intelligence has started influencing many branches of medicine, and it is anticipated that it will influence reproductive medicine in a major way. It is foreseen that both precision medicine and artificial intelligence will have a significant role in the IVF centres of the future, improving outcomes, further decreasing pregnancy complications, and letting couples have greater control over their reproductive events.
© 2021 Kulvinder Kochar Kaur. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and building upon the work non-commercially.