The URV coordinates an international project that will improve the detection and prevention of breast cancer recurrence

A research group from the Rovira i Virgili University, led by Professor Domènec Puig, is classifying the types of breast cancer and predicting the probability of metastasis.

Domènec Puig, on the left, and Hatem Rashwan, two of those responsible for the BosomShield project on behalf of the URV

The Universitat Rovira i Virgili is leading an ambitious international project known as BosomShield, which has the potential to play a pivotal role in the detection and prevention of breast cancer. At its core, BosomShield focuses on developing a sophisticated software platform that analyzes both radiological and histopathological images, bringing together traditional imaging methods such as mammograms and MRIs with the examination of microscopic cell-level images. By combining these two types of images, the project seeks to improve the precision of breast cancer classification, predict the severity of the condition, and estimate the probability of metastatic recurrence. The project is led by the URV’s Laboratory of Intelligent Robotics and Vision, headed by researcher Domènec Puig and part of the ITAKA research group within the Department of Computer Engineering and Mathematics. BosomShield enjoys the support of universities, hospitals, biomedical research groups, and technology centers in Europe, Asia, and North America, with funding secured from the European Union’s Marie Skłodowska-Curie Actions program, and is set to run until 2026.

BosomShield comprises ten distinct subprojects, each undertaken by one of the collaborating institutions and each addressing a different stage of the process: the analysis of radiological and histopathological images, the prediction of the likelihood of recurrence, and the design of the platform. Each subproject is led by a doctoral candidate selected by the participating institutions, fostering an enriching international exchange of expertise among researchers. The URV team, for instance, is overseeing the first subproject, which aims to determine the molecular subtype of breast cancer from multimodal radiological images. In collaboration with Swedish partners, this work uses deep learning and artificial intelligence to identify tumor markers in radiological images, providing crucial insights into the potential danger and likelihood of recurrence. Ultimately, the BosomShield project aims to create a practical clinical platform, accessible within hospitals, that offers specialists alerts and valuable assistance in making well-informed and efficient decisions about breast cancer diagnosis and treatment. It is the result of collaborative efforts between the URV and the IISPV, and marks a significant stride towards a universal and effective breast cancer diagnosis system.
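In very reduced form, the subtype-prediction step described above can be sketched as a classifier that maps image-derived features to one of the four standard molecular subtypes of breast cancer. Everything below (the feature vector, the untrained weights, the `classify_subtype` helper) is a hypothetical illustration, not the project’s actual model, which relies on deep learning over multimodal radiological images:

```python
import numpy as np

# The four standard molecular subtypes of breast cancer, used as class labels.
SUBTYPES = ["Luminal A", "Luminal B", "HER2-enriched", "Triple-negative"]

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def classify_subtype(features, weights, bias):
    """Map a feature vector (e.g. extracted from radiological images by a
    pretrained network) to a predicted subtype and class probabilities."""
    probs = softmax(features @ weights + bias)
    return SUBTYPES[int(np.argmax(probs))], probs

# Toy example with random, untrained parameters -- for illustration only.
rng = np.random.default_rng(0)
features = rng.normal(size=8)        # stand-in for learned image features
weights = rng.normal(size=(8, 4))
bias = np.zeros(4)
label, probs = classify_subtype(features, weights, bias)
print(label, probs.round(3))
```

In a real system, the feature extractor and the classification head would be trained end to end on labeled radiological images rather than drawn at random.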



Saddam Abdulwahab defended his PhD

Supervised Monocular Depth Estimation Based on Machine and Deep Learning Models

Abstract: Depth estimation refers to measuring the distance of each pixel relative to the camera. It is crucial for many applications, such as scene understanding and reconstruction, robot vision, and self-driving cars. Depth maps can be estimated from stereo or monocular images. Depth estimation is typically performed through stereo vision, following several time-consuming stages such as epipolar geometry, rectification, and matching. Predicting depth maps from single RGB images, however, remains challenging, as object shapes must be inferred from intensity images that are strongly affected by viewpoint changes, texture content, and lighting conditions. Additionally, the camera captures only a 2D projection of the 3D world, and the apparent size and position of objects in the image can change significantly depending on their distance from the camera.

Stereo cameras have been deployed in systems to obtain depth maps. Although they perform well, their main drawbacks are the complex and expensive hardware setup they require and their time complexity, which limit their use. Monocular cameras, in turn, are simpler and cheaper; however, single images inherently lack depth information. Many approaches to predicting depth maps from monocular images have recently been proposed, thanks to the revolution in deep learning models. However, most of these solutions produce blurry, low-resolution depth maps. In general, depth estimation requires appropriate representation methods to extract the features shared between a single RGB image and the corresponding depth map.
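As a point of contrast with learned monocular methods, the geometric core of stereo depth is simple once rectification and matching are done: under the standard pinhole stereo model, depth is inversely proportional to disparity. A minimal sketch (the focal length, baseline, and disparity values below are made-up example numbers, not from the thesis):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo model: depth = focal_length * baseline / disparity.

    disparity_px -- horizontal pixel shift of a point between the two
                    rectified views
    focal_px     -- focal length in pixels
    baseline_m   -- distance between the two camera centers, in meters
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: 700 px focal length, 12 cm baseline, 35 px disparity
d = depth_from_disparity(35.0, 700.0, 0.12)
print(round(d, 3))  # 2.4 (meters)
```

This inverse relationship is also why the expensive matching stage matters: a one-pixel disparity error at small disparities translates into a large depth error at long range.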

Consequently, this thesis contributes to two research lines in estimating depth maps (also known as depth images). The first estimates depth based on the object present in a scene, reducing the complexity of handling the complete scene; we developed new techniques and concepts based on both traditional and deep learning methods to achieve this task. The second estimates depth for a complete scene from a monocular camera; here we developed more comprehensive techniques with high precision and acceptable computational time to obtain more accurate depth maps.



Nasibeh Saffari defended her PhD


Abstract: Mammographic breast density (MBD) reflects the amount of fibroglandular breast tissue that appears white and bright on mammograms, commonly quantified as percent breast density (PD%). MBD is both a risk factor for breast cancer and a factor that can mask tumors. However, accurate estimation of MBD by visual assessment remains a challenge due to poor contrast and significant variations in background adipose tissue in mammograms. In addition, the correct interpretation of mammography images requires highly trained medical experts: it is difficult, laborious, expensive, and prone to errors. Dense breast tissue can make breast cancer more difficult to identify and is associated with a higher risk of the disease; for example, women with high breast density have been reported to have a four to six times greater risk of developing breast cancer than women with low breast density.

The key to computing and classifying breast density is correctly detecting dense tissue in mammographic images. Many methods have been proposed to estimate breast density, but most are not automated, and they are severely affected by low signal-to-noise ratios and by variability in the appearance and texture of dense tissue. A computer-aided diagnosis (CAD) system that helps the doctor analyze and diagnose mammograms automatically would therefore be valuable, and the current development of deep learning methods motivates us to improve existing breast density analysis systems. The main focus of this thesis is to develop a system that automates breast density analysis, covering breast density segmentation (BDS), breast density percentage (BDP), and breast density classification (BDC), using deep learning techniques, and to apply it to temporal mammograms after treatment in order to analyze changes in breast density and identify patients at risk.
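Once dense tissue has been segmented, the breast density percentage reduces to a pixel ratio between the dense-tissue mask and the whole-breast mask. A minimal sketch with toy binary masks (the masks and the `breast_density_percentage` helper are illustrative; the thesis pipeline obtains the masks with deep-learning segmentation):

```python
def breast_density_percentage(dense_mask, breast_mask):
    """PD% = (dense-tissue pixels / breast pixels) * 100.

    Both arguments are binary masks given as nested lists of 0/1,
    where breast_mask marks the breast region and dense_mask marks
    the fibroglandular (dense) tissue inside it.
    """
    breast_px = sum(sum(row) for row in breast_mask)
    dense_px = sum(sum(row) for row in dense_mask)
    if breast_px == 0:
        raise ValueError("breast mask is empty")
    return 100.0 * dense_px / breast_px

# Toy 4x4 masks: 12 breast pixels, 3 of them dense -> PD% = 25.0
breast = [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [0, 0, 0, 0]]
dense  = [[1, 1, 0, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
print(breast_density_percentage(dense, breast))  # 25.0
```

Comparing this percentage across a patient’s temporal mammograms is what allows changes in density after treatment to be tracked, as the abstract describes.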
