Pelvic floor muscle contraction automatic evaluation algorithm for pelvic floor muscle training biofeedback using self-performed ultrasound

Abstract

Introduction

Non-invasive biofeedback of pelvic floor muscle training (PFMT) is required for continuous training in home care. Therefore, we considered self-performed ultrasound (US), in which adult women apply a handheld US device to their own bladder. However, US images are difficult to read, and assistance is required when US is used at home. In this study, we aimed to develop an algorithm for the automatic evaluation of pelvic floor muscle (PFM) contraction using self-performed bladder US videos, and to verify whether PFM contraction can be automatically determined from US videos.

Methods

Women aged ≥ 20 years were recruited from the outpatient Urology and Gynecology departments of a general hospital or through snowball sampling. The researcher supported the participants in their self-performed bladder US, and videos were obtained several times during PFMT. The US videos were used to develop an automatic evaluation algorithm. Supervised machine learning was then performed using expert PFM contraction classifications as the ground truth. Time-series features were generated from the x- and y-coordinate values of the bladder area, including the bladder base. The final model was evaluated for accuracy, area under the curve (AUC), recall, precision, and F1. The contribution of each feature variable to the classification ability of the model was estimated.

Results

The 1144 videos obtained from 56 participants were analyzed. We split the data into training and test sets with 7894 time-series features. A light gradient boosting machine (LightGBM) model was selected, and the final model achieved an accuracy of 0.73, AUC of 0.91, recall of 0.66, precision of 0.73, and F1 of 0.73. Movement of the y-coordinate of the bladder base was the most important feature.

Conclusion

This study showed that automated classification of PFM contraction from self-performed US videos is possible with high accuracy.

Introduction

The International Continence Society defines urinary incontinence (UI) [1] as “the complaint of any involuntary leakage of urine.” UI is common in women because pregnancy and childbirth are risk factors [2]. Estimates of UI prevalence in women vary between 25% and 45% in most studies [3]. The physical and psychological effects of UI lower a patient’s quality of life [4]. Therefore, prevention and improvement of UI symptoms are important for adult women. Pelvic floor muscle training (PFMT) is recommended to prevent and improve stress, urgency, and mixed UI [5].

Because the pelvic floor muscles (PFM) cannot be visualized and are difficult to contract correctly [6], PFMT combined with biofeedback is effective [7, 8]. Biofeedback involves visual or auditory feedback to gain control over involuntary bodily functions. Perineometry and electromyography [9, 10] are common biofeedback methods used in PFMT; however, they are invasive because they require instruments to be inserted into the vagina or applied to the anus. Currently, biofeedback is mainly performed in hospitals because it requires specialized equipment and techniques. Because PFMT needs to be continued mainly at home for continuous training [11], an easy, non-invasive biofeedback method that patients can use by themselves at home to verify correct PFM contraction in real time is needed.

Ultrasonography (US) can be used as a non-invasive biofeedback method for PFMT, via either the transperineal [12] or the transabdominal [13] approach. Transabdominal US exposes the patient less than transperineal US and can measure bladder base elevation during PFMT to confirm PFM contraction [14]. Bladder base elevation confirmed by US correlates with perineometry, the gold-standard measurement method for PFM contraction [15]. Furthermore, current US devices include handheld models that work with a smartphone and can readily be used personally at home for PFMT biofeedback.

Recent studies have demonstrated the feasibility of self-performed US [16,17,18]. For example, self-performed endovaginal telemonitoring was used for reproductive evaluations during infertility treatment [17], and a study during COVID-19 showed that adults at home could feasibly self-perform lung US with remote teaching [18]. Thus, women with UI may also be able to self-perform US.

However, there is a problem in applying self-performed US to PFMT biofeedback: US images are difficult to interpret, because reading them requires specialized knowledge. Therefore, assistance functions are needed to confirm bladder base elevation. Recently, automatic image processing techniques have been used to analyze US images [19], and an automatic bladder area extraction function for US images has already been developed [20]. What is still needed is a method that can automatically assess bladder base elevation due to PFM contraction. A recent study reported automatic evaluation of the bladder during PFM contraction using transperineal US [21]; however, transperineal US is difficult to perform and therefore unsuitable for self-performed US.

Therefore, this study aimed to develop an algorithm for the automatic evaluation of PFM contraction using transverse bladder images obtained by self-performed US, and to test whether PFM contraction can be automatically identified from US videos.

Methods

Study design and settings

This was a developmental study that used US videos collected through a cross-sectional study. We recruited participants from the outpatient Urology and Gynecology departments of a general hospital in Tokyo or by snowball sampling. Data from outpatients were collected in a multipurpose examination room at the hospital, and data from snowball-sampling participants were collected in a laboratory at a university in Tokyo, with privacy maintained in both settings. One or two researchers accompanied each participant during data collection. The study period was from March to November 2022.

This study was approved by the Research Ethics Committee of the Graduate School of Medicine, University of Tokyo (No. 2021256NI-[2]). Each participant read and gave written informed consent.

Participants

Women with and without UI were recruited to develop a widely usable algorithm for women. Women aged ≥ 20 years without UI were recruited through snowball sampling, and women aged ≥ 40 years with UI were recruited at the outpatient Urology and Gynecology departments of a general hospital in Tokyo or by snowball sampling. The Japanese version of the International Consultation on Incontinence Questionnaire Short Form (ICIQ-SF) [22], an international questionnaire, was used to confirm UI. Participants who answered “1” or more to Question 1 (“How often do you leak urine?”) were considered to have UI, whereas participants who answered “0” were considered not to have UI in this study. We excluded the following women: (1) those unable to apply the US probe because of limitations in activities of daily living, such as contractures or paralysis of the arms or fingers; (2) those diagnosed with, or suspected by a physician of having, dementia; (3) those with skin disease or tenderness in the lower abdomen; (4) those who had undergone cystectomy; (5) pregnant women; (6) women within four weeks of delivery; and (7) those considered by a physician to be inappropriate for participation in the study.

At the hospital, the physician’s permission was obtained before recruiting each patient. After the physician’s permission was obtained, patients were recruited, and consent was obtained, during their outpatient visit.

US data acquisition and the study protocol

The researcher supported the participants in self-performed bladder US. A handheld US device (iViz air; FUJIFILM, Tokyo, Japan) was used with a 3–5 MHz convex probe for bladder imaging at a depth of 15.0 cm in B mode. The self-performed bladder US was performed in the sitting position, which was chosen because it makes it easy to perform PFMT while checking the US display.

The participants self-performed the US at least 30 min after their last urination. After the bladder was shown on the display by self-performed US, video recording was started. In response to the researcher’s calls, participants performed PFMT 4–10 times. PFM contraction and rest call times were recorded. One to three recordings were obtained, depending on the participants’ physical condition and schedule. When possible, multiple videos were recorded under different urine volumes. Other variables obtained were age, body mass index (BMI), number of deliveries, ICIQ-SF score, overactive bladder symptom score (OABSS) [23], experience with PFMT, and medical history.

Data processing

The videos obtained from the participants were trimmed based on the contraction times. The trimmed videos were blinded and classified by two experts with sufficient US experience, and these classifications were used as the ground truth data. The classifications were undeterminable, correct PFM contraction, failure to contract PFM, and no contraction. If the classifications of the two experts differed, the video was considered unsuitable for analysis and excluded.
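The trimming step can be sketched as follows. This is a minimal illustration rather than the study's actual implementation; the frame rate and the (start_s, end_s) call-time pairs are assumed inputs.

```python
FPS = 24  # assumed frame rate of the recorded US videos

def trim_segments(frames, call_times, fps=FPS):
    """Cut one recorded video into per-contraction clips.

    `frames` is the full sequence of video frames; `call_times` is a list
    of (start_s, end_s) pairs taken from the recorded contraction/rest
    calls. Returns one clip (a list of frames) per contraction.
    """
    clips = []
    for start_s, end_s in call_times:
        clips.append(frames[int(start_s * fps):int(end_s * fps)])
    return clips
```

One recording with 4–10 contraction calls would thus yield 4–10 trimmed clips, which matches the ratio of 233 recordings to 1554 trimmed videos reported in the Results.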

Specific coordinates were obtained to extract features from the US videos of the bladder. The bladder area was extracted from the images using an automatic image processing system [20], as shown in Fig. 1. The x- and y-coordinate values of five points in the bladder area (Center, the center point of the maximum bladder diameter; Right/Left, the points where the maximum bladder diameter intersects the bladder wall on each side; Top/Bottom, the points where the bladder wall intersects a vertical line through the center) were automatically extracted. These variables were extracted as time-series data every 10 frames; the videos were recorded at 24 fps.
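The per-frame coordinate sampling can be sketched as follows. Here `extract_points` is a hypothetical callable standing in for the automatic bladder extraction system of ref. [20]; only the sampling stride and frame rate come from the text.

```python
FPS = 24      # videos were recorded at 24 fps
STRIDE = 10   # coordinates were kept every 10 frames (~0.42 s apart)

def sample_landmarks(frames, extract_points):
    """Return (t_seconds, points) rows for every STRIDE-th frame.

    `extract_points` maps one frame to a dict of the five landmarks
    (center, right, left, top, bottom), each an (x, y) pair.
    """
    series = []
    for i in range(0, len(frames), STRIDE):
        points = extract_points(frames[i])
        series.append((i / FPS, points))
    return series
```

Sampling every 10th frame keeps roughly 2.4 coordinate rows per second, which is enough to follow bladder base motion during a contraction while keeping the time series short for feature extraction.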

Fig. 1

Extracted bladder area. The yellow line surrounding the bladder is the result of bladder area extraction. The white circle is the center point (Center) of the bladder’s maximum diameter. Arrows mark the points where the maximum diameter crosses the left and right bladder walls (Left, Right). Arrowheads mark the points where the top and bottom bladder walls cross a vertical line through the center point (Top, Bottom). This US image shows a case in which the bladder area was extracted correctly

To generate the time-series bladder features, the features extracted from the trimmed videos were analyzed with the feature-extraction library tsfresh [24] in Python 3.8.15 on JupyterLab. With tsfresh, time-series features such as the number of peaks, mean, minimum, maximum, and frequency were extracted from the bladder Center, Top, Bottom, Right, and Left coordinates. As additional features, the bladder area, a surrogate for urine volume, was calculated as the product of the distance between Right_x and Left_x and the distance between Top_y and Bottom_y, and the difference between the minimum and maximum values of Bottom_y was obtained as the bladder base elevation.
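The two hand-crafted features described above can be sketched directly (the bulk of the features came from tsfresh). Coordinate names follow the five landmarks in Fig. 1; the function names are illustrative, not from the study's code.

```python
def bladder_area_surrogate(right_x, left_x, top_y, bottom_y):
    """Surrogate for urine volume: the product of the horizontal
    diameter (Right_x to Left_x) and the vertical diameter
    (Top_y to Bottom_y)."""
    return abs(right_x - left_x) * abs(top_y - bottom_y)

def bladder_base_elevation(bottom_y_series):
    """Bladder base elevation: the difference between the maximum and
    minimum Bottom_y values over one trimmed clip."""
    return max(bottom_y_series) - min(bottom_y_series)
```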

Machine learning

The virtual environment used PyCaret 2.3.1, NumPy 1.20.3, pandas 1.5.2, Matplotlib 3.6.2, scikit-learn 0.23.2, and Python 3.8.15 on JupyterLab. This study used the PyCaret library [25] for supervised machine learning (ML) multiclass classification. The expert PFM contraction classifications were used as the ground truth and merged with the features as labeled data. We then divided the original data into 70% training data and 30% test data. We used five-fold cross-validation to avoid overfitting due to the lack of a large dataset [26,27,28]. After cross-validation, 15 classifiers were compared and the two with the highest accuracy were selected; logistic regression was also included as a classical classifier for reference. Hyperparameter tuning via a grid search was performed to optimize model performance, and the classification model with the highest accuracy was selected. The outcome of this study was the performance of the developed algorithm on the participants’ self-performed bladder videos, evaluated by accuracy, area under the curve (AUC), recall, precision, and F1. To estimate the contribution of each feature variable to the model’s classification ability, the top 10 feature importances were also calculated.
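The data-splitting procedure above can be sketched without PyCaret. This is a minimal illustration, assuming a random 70/30 split followed by five-fold cross-validation index generation on the training set; PyCaret's own splitter may differ in details such as stratification.

```python
import random

def train_test_split_indices(n, test_frac=0.30, seed=0):
    """Randomly split sample indices 70/30, as in the study."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    cut = int(n * (1 - test_frac))
    return idx[:cut], idx[cut:]

def kfold_indices(indices, k=5):
    """Yield (train, validation) index lists for k-fold cross-validation."""
    folds = [indices[i::k] for i in range(k)]
    for i in range(k):
        val = folds[i]
        train = [j for f_i, f in enumerate(folds) if f_i != i for j in f]
        yield train, val
```

Each of the five folds serves once as the validation set, so every training sample is used for validation exactly once, which is what makes cross-validation a guard against overfitting on a small dataset.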

Statistical analysis

Descriptive statistics were calculated for the participant characteristics. Categorical data were presented as percentages and continuous data were presented as means and standard deviations (SD).

Results

US data acquisition

Fifty-six participants were recruited for the study (Fig. 2). Table 1 presents the characteristics of the participants. A total of 233 self-performed bladder videos were recorded and trimmed into 1554 clips based on the contraction times. After excluding videos classified differently by the experts, 1144 videos were analyzed. The experts’ classification was as follows: undeterminable, 96; correct PFM contraction, 527; failure to contract PFM, 297; and no contraction, 224 (Fig. 3).

Fig. 2

Flow diagram of participants. ICIQ-SF = International Consultation on Incontinence Questionnaire Short Form. UI = Urinary Incontinence. ADL = Activities of Daily Living

Table 1 Characteristics of participants
Fig. 3

Flow diagram of data for the machine learning process. PFM = Pelvic Floor Muscles. UI = Urinary Incontinence

Extraction features and machine learning

The 1144 videos were divided into a training dataset (70%) and a test dataset (30%). A total of 10,257 features were extracted, and ML was performed with the 7,894 features remaining after excluding those with missing values (Fig. 3). In the classifier comparison after the five-fold cross-validation, the light gradient boosting machine (LightGBM) [28] and the random forest classifier showed high accuracy, whereas logistic regression had an accuracy of 0.47 (Table 2). LightGBM was selected because it had the highest accuracy and performed well on the other indices, and its hyperparameters were tuned using a grid search with five-fold cross-validation. The final classification model results were as follows: accuracy = 0.73, AUC = 0.91, recall = 0.66, precision = 0.73, and F1 = 0.73. The results for each classification are shown as a confusion matrix in Fig. 4. In the feature importance analysis, a fast-Fourier-transformed Bottom_y feature ranked highest, and 5 of the top 10 features were related to Bottom_y. The top three features involved the y-coordinates of the upper and lower portions of the bladder.
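The reported evaluation metrics can be reproduced from a confusion matrix such as the one in Fig. 4. The sketch below assumes macro-averaged recall, precision, and F1 for the multiclass setting; the averaging convention PyCaret applied is not stated in the text.

```python
def macro_metrics(cm):
    """Overall accuracy plus macro-averaged recall, precision, and F1
    from a square confusion matrix cm[true][pred]."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(n))
    recalls, precisions, f1s = [], [], []
    for i in range(n):
        tp = cm[i][i]
        fn = sum(cm[i]) - tp                          # missed class i
        fp = sum(cm[r][i] for r in range(n)) - tp     # wrongly called i
        rec = tp / (tp + fn) if tp + fn else 0.0
        pre = tp / (tp + fp) if tp + fp else 0.0
        f1 = 2 * pre * rec / (pre + rec) if pre + rec else 0.0
        recalls.append(rec)
        precisions.append(pre)
        f1s.append(f1)
    return {
        "accuracy": correct / total,
        "recall": sum(recalls) / n,
        "precision": sum(precisions) / n,
        "f1": sum(f1s) / n,
    }
```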

Table 2 Performance of each model based on 5-fold cross-validation
Fig. 4

Confusion matrix for each estimation by the LightGBM model in the test dataset. PFM = Pelvic Floor Muscles. LightGBM = Light Gradient Boosting Machine

Discussion

In this study, a model was developed to automatically evaluate PFMT biofeedback using ultrasound, a task that previously required advanced image-reading skills and could only be performed in hospitals by experts. The model was developed through supervised ML using the classifications of an expert sonographer as the ground truth, achieving a high level of performance with an accuracy of 0.73 and an AUC of 0.91. These results indicate the possibility of automatic evaluation of PFM contraction from US videos, similar to a previous study [21]. The novelty of this study is threefold: it used transabdominal, transverse bladder videos, whereas the previous study used transperineal US videos [21]; it automatically evaluated PFM contraction in four categories; and all US videos were obtained by the participants themselves.

In this study, supervised ML was applied to 1144 US videos of adult women. The model enabled four classifications (undeterminable, correct PFM contraction, failure to contract PFM, and no contraction) with high accuracy. It may help determine not only whether PFMT was successful, but also in what way the contraction went wrong, helping the patient understand how to modify her PFMT. For 23 videos, the model output was correct PFM contraction although the PFM were not properly contracted. These videos showed no evidence of PFM contraction; however, they contained abdominal wall movement due to abdominal breathing and subtle probe movements, which may have caused misclassification through changes in apparent bladder position. Further improvement in estimation performance is expected in the future by enabling the system to distinguish fine movements such as respiratory variation and by incorporating compensation for probe shake.

Five of the top ten features identified in this study were related to the bladder base. In previous studies, expert feedback using US images suggested the importance of bladder base motion [29, 30]; the feature importance analysis here is consistent with these clinical findings. The most important feature was derived from the fast Fourier transform, a numerical representation of the amplitude and position of the waveform based on frequency-component decomposition, which captures the complexity of the time-series data. The top three features involved the y-coordinates of the upper and lower portions of the bladder, indicating that variations in these portions were significant determinants of PFM contraction.
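A fast Fourier transform decomposes a time series into frequency components. The plain discrete Fourier transform below computes the same per-frequency amplitudes and is shown only as an illustration of the kind of Bottom_y quantity tsfresh produces; an FFT is simply a faster algorithm for the same result.

```python
import cmath

def dft_amplitudes(series):
    """Amplitude of each frequency component k of a time series x_t:
    |(1/n) * sum_t x_t * exp(-2*pi*i*k*t/n)|."""
    n = len(series)
    return [
        abs(sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                for t, x in enumerate(series))) / n
        for k in range(n)
    ]
```

For a Bottom_y trace, a large low-frequency amplitude corresponds to a slow, sustained elevation of the bladder base, whereas energy at higher frequencies reflects faster oscillations such as respiratory movement.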

This study has a few limitations. First, the number of participants diagnosed with UI or overactive bladder was small; therefore, the ML dataset included only a limited number of bladder US videos obtained during PFMT from such patients. To use this algorithm as biofeedback for the initial training of patients with severe symptoms, more US videos of patients with UI and overactive bladder should be learned in the future. Second, the number of US videos obtained in this study was small owing to the limited recruitment period; further analysis with an expanded dataset could enhance the predictive accuracy of PFM contraction in future studies. In addition to the participant characteristics obtained in this study, personal data such as delivery information, smoking, and alcohol consumption, which may be associated with urinary incontinence, could be collected to develop a more clinically relevant automated PFM evaluation system. In addition, more than half of the participants were recruited through snowball sampling. This was done because PFMT is used not only to treat UI symptoms but also as a preventive measure, and we therefore aimed to develop a more general-purpose system by recruiting women in the community as well as hospital patients. Although snowball sampling may have introduced selection bias, this study combined the snowball-sampled data with the hospital data to perform ML; we therefore believe the developed algorithm minimizes bias as much as possible.

This study revealed that four-category classification of PFM contractions is possible from women’s self-performed US videos. We used ML with bladder coordinate data for the automatic evaluation of PFM contraction, because we considered these coordinates the variables that best capture bladder movement in this new attempt to estimate motion from self-performed bladder US videos. However, other approaches, such as convolutional neural networks (CNN) and Vision Transformers, can also estimate PFM contraction from bladder US videos. Because these methods require large amounts of data, they were not implemented in this study, where the amount of data was limited. Future research should add more data and examine analysis techniques to further improve the accuracy of PFMT classification. Furthermore, the feasibility and effectiveness of this system for UI symptoms need to be established.

The biofeedback method proposed in this study, combined with self-performed US, suggests that women with UI can receive PFMT biofeedback at home at any time. This is expected to help women maintain their health by improving and preventing UI and ultimately extending their healthy life expectancy through UI independence.

Conclusion

In this study, supervised ML was performed using bladder videos obtained from women’s self-performed US to develop an algorithm for the automatic evaluation of PFM contractions. The algorithm automatically classified PFM contractions with a high performance of 0.73 accuracy and 0.91 AUC. These results suggest a new biofeedback method for women requiring PFMT.

Availability of data and materials

The datasets generated and/or analyzed during the current study are not publicly available, because participants did not consent to the public release of research data and because the ongoing study involves sensitive and confidential information, but they are available from the corresponding author on reasonable request.

References

  1. Haylen BT, De Ridder D, Freeman M, et al. An International Urogynecological Association (IUGA)/International Continence Society (ICS) joint report on the terminology for female pelvic floor dysfunction. Int Urogynecol J. 2010;21(1):5–26.

  2. Mørkved S, Bø K. Effect of pelvic floor muscle training during pregnancy and after childbirth on prevention and treatment of urinary incontinence: a systematic review. Br J Sports Med. 2014;48(4):299–310.

  3. Rosier PFWM, Kuo HC, De Gennaro M, et al. Urodynamic testing. In: Abrams P, Cardozo L, Khoury S, et al., editors. Incontinence: 5th International Consultation on Incontinence. Plymouth, UK: Health Publication; 2013. p. 429–506.

  4. Avery C, Gill K, MacLennan H, Chittleborough R, Grant F, Taylor W. The impact of incontinence on health-related quality of life in a South Australian population sample. Aust N Z J Public Health. 2004;28(2):173–9.

  5. Dumoulin C, Cacciari LP, Hay-Smith EJC. Pelvic floor muscle training versus no treatment, or inactive control treatments, for urinary incontinence in women. Cochrane Database Syst Rev. 2018;10(10):CD005654.

  6. Thompson JA, O’Sullivan PB. Levator plate movement during voluntary pelvic floor muscle contraction in subjects with incontinence and prolapse: a cross-sectional study and review. Int Urogynecol J. 2003;14(2):84–8.

  7. Galea MP, Tisseverasinghe S, Sherburn M. A randomized controlled trial of transabdominal ultrasound biofeedback for pelvic floor muscle training in older women with urinary incontinence. Australian New Z Cont J. 2013;19(2):38–44.

  8. Herderschee R, Hay-Smith EJ, Herbison GP, Roovers JP, Heineman MJ. Feedback or biofeedback to augment pelvic floor muscle training for urinary incontinence in women. Cochrane Database Syst Rev. 2011;6(7):1–145.

  9. Chiang H, Jiang H, Kuo C. Therapeutic efficacy of biofeedback pelvic floor muscle exercise in women with dysfunctional voiding. Sci Rep. 2021;11(1):1–8.

  10. Bertotto A, Schvartzman R, Uchôa S, Wender MCO. Effect of electromyographic biofeedback as an add-on to pelvic floor muscle exercises on neuromuscular outcomes and quality of life in postmenopausal women with stress urinary incontinence: a randomized controlled trial. Neurourol Urodyn. 2017;36(8):2142–7.

  11. Okeahialam NA, Oldfield M, Stewart E, Bonfield C, Carboni C. Pelvic floor muscle training: a practical guide. BMJ. 2022;378:e070186.

  12. Dietz HP, Wilson PD, Clarke B. The use of perineal ultrasound to quantify levator activity and teach pelvic floor muscle exercises. Int Urogynecol J Pelvic Floor Dysfunct. 2001;12(3):166–9.

  13. Ikeda M, Mori A. Vaginal palpation versus transabdominal ultrasound in the comprehension of pelvic floor muscle contraction after vaginal delivery: a randomised controlled trial. BMC Womens Health. 2021;21(1):1–9.

  14. Whittaker JL, Thompson JA, Teyhen DS, Hodges P. Rehabilitative ultrasound imaging of pelvic floor muscle function. J Orthop Sports Phys Ther. 2007;37(8):487–98.

  15. Chehrehrazi M, Arab AM, Karimi N, Zargham M. Assessment of pelvic floor muscle contraction in stress urinary incontinent women: comparison between transabdominal ultrasound and perineometry. Int Urogynecol J Pelvic Floor Dysfunct. 2009;20(12):1491–6.

  16. Kirkpatrick AW, Mckee JL, Couperus K, Colombo CJ. Patient self-performed point-of-care ultrasound: using communication technologies to empower patient self-care. Diagnostics. 2022;12(11):2884.

  17. Dalewyn L, Deschepper E, Gerris J. Correlation between follicle dimensions recorded by patients at home (SOET) versus ultrasound performed by professional care providers. Facts Views Vis ObGyn. 2017;9(3):153–6.

  18. Kirkpatrick AW, McKee JL, Ball CG, Ma IWY, Melniker LA. Empowering the willing: the feasibility of tele-mentored self-performed pleural ultrasound assessment for the surveillance of lung health. Ultrasound J. 2022;14(1):2.

  19. Matsumoto M, Tsutaoka T, Nakagami G, et al. Deep learning-based classification of rectal fecal retention and analysis of fecal properties using ultrasound images in older adult patients. Jpn J Nurs Sci. 2020;17(4):e12340.

  20. Matsumoto M, Tsutaoka T, Yabunaka K, et al. Development and evaluation of automated ultrasonographic detection of bladder diameter for estimation of bladder urine volume. PLoS ONE. 2019;14(9):1–9.

  21. Haruna S, Danniyaer K, Kenta K, et al. Development of a method using transperineal ultrasound movies for automatic evaluation of the contraction of pelvic floor muscles in men after radical prostatectomy. J Nurs Sci Eng. 2022;9:242–52.

  22. Gotoh M, Homma Y, Funahashi Y, Matsukawa Y, Kato M. Psychometric validation of the Japanese version of the International Consultation on Incontinence Questionnaire-Short Form. Int J Urol. 2009;16(3):303–6.

  23. Homma Y, Yoshida M, Seki N, et al. Symptom assessment tool for overactive bladder syndrome-overactive bladder symptom score. Urology. 2006;68(2):318–23.

  24. Christ M, Braun N, Neuffer J. tsfresh documentation. https://tsfresh.readthedocs.io/en/latest/. Accessed 12 Dec 2022.

  25. Evrimler S, Ali Gedik M, Ahmet Serel T, Ertunc O, Alperen Ozturk S, Soyupek S. Bladder urothelial carcinoma: machine learning-based computed tomography radiomics for prediction of histological variant. Acad Radiol. 2022;29(11):1682–9.

  26. Duda RO, Hart PE, Stork DG. Pattern Classification. 2nd ed. Wiley; 2012.

  27. Varma S, Simon R. Bias in error estimation when using cross-validation for model selection. BMC Bioinformatics. 2006;7(91):1–8.

  28. Ke G, Meng Q, Finley T et al. LightGBM: a highly efficient gradient boosting decision tree. NIPS. 2017:3147–55.

  29. Whittaker J. Abdominal ultrasound imaging of pelvic floor muscle function in individuals with low back pain. J Man Manip Ther. 2004;12(1):44–9.

  30. Ubukata H, Maruyama H, Huo M. Reliability of measuring pelvic floor elevation with a diagnostic ultrasonic imaging device. J Phys Ther Sci. 2015;27(8):2495–7.

Acknowledgements

We would like to express our gratitude to Mr. Mikihiko Karube from FUJIFILM for his technical support. We appreciate the kind cooperation of all study participants.

Funding

This work was supported by JSPS KAKENHI Grant Numbers 22K19685, 20H00560, a grant from Beyond AI Institute at The University of Tokyo, and a grant from the Japanese Society of Wound, Ostomy, and Continence Management.

Author information

Contributions

The conception and design of the work were substantially contributed to by MM, TT, NT, MS, AK, HS, and GN. MM, TT, MS, AK, and GN were involved in the acquisition of data, while all authors contributed to the analysis of data. The manuscript was drafted and substantively revised by MM, TT, GN. Project development was led by NT, HS, and GN, with HS and GN also managing the project. Each author has approved the submitted version of the manuscript, agrees to be accountable for their own contributions, and has ensured the integrity of the work.

Corresponding author

Correspondence to Gojiro Nakagami.

Ethics declarations

Ethics approval and consent to participate

This study was approved by the Research Ethics Committee of the Graduate School of Medicine, The University of Tokyo (No. 2021256NI-[2]). In accordance with the approval of the ethical review board, each participant read and gave written informed consent. This process included informing participants about their right to refuse participation and the freedom to withdraw from the study at any time without penalty.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Cite this article

Muta, M., Takahashi, T., Tamai, N. et al. Pelvic floor muscle contraction automatic evaluation algorithm for pelvic floor muscle training biofeedback using self-performed ultrasound. BMC Women's Health 24, 219 (2024). https://doi.org/10.1186/s12905-024-03041-y
