www.sunflowerhospital.in

4 Common Myths About IVF Treatment

Despite the growing popularity of IVF (in-vitro fertilization) as a successful solution for infertility, there is a fair share of myths related to it.

Some believe that the cost of IVF in Ahmedabad and other places in India is beyond the reach of ordinary people, making cost one of the major reasons individuals don’t opt for IVF treatment. Beyond cost, the other major obstacle is the set of misconceptions about the treatment, formed by common IVF myths that make couples think IVF is harmful to the mother and baby.

These misconceptions prevent couples from seeking treatment at the right time. They often delay visiting IVF specialists while they can still conceive through treatment, until it is too late. Once fertility begins to decline, the decline is rapid, leaving only a short window in which treatment can give good outcomes.

Obviously, one should consider treatment only with accurate information. To separate the myths from the facts about IVF treatment, the wisest decision is to consult an infertility specialist at one of the best IVF centers in Gujarat, India.

PCOS (polycystic ovary syndrome) leads to anovulation due to increased levels of androgens (male hormones) and insulin resistance. A few signs of PCOS are evident from the time a woman gets her first period; some symptoms are misleading and point to other ailments; and some women only discover the condition when they undergo tests while facing difficulty getting pregnant.

Commonly believed myths about IVF:

Myth 1: IVF is never successful on the first try

The success rate of IVF depends on many factors, such as the woman’s age, the quality of the eggs and sperm, and the resulting embryos. Tubal factors and uterine conditions determine the chances of implantation, as does the overall health of the woman’s body to carry the pregnancy.

Although it is difficult to predict whether a woman will succeed on the first try or will require a few attempts, results show that:

  • On average, 70% of IVF cases conceive in the first attempt
  • 85% conceive within two attempts
  • 95% conceive within three attempts

Myth 2: IVF is painful

The most common myth is that IVF is painful, yet very few women report any sort of pain during their treatment. Egg pick-up and embryo transfer are done under general anesthesia, take about 15 minutes, and you can resume normal activities within 4-6 hours. Some women believe that IVF injections are painful, but this is largely unfounded. Because everyone reacts differently to things like needles, ultrasounds, and vaginal catheters, pain is a hard thing to measure; still, you should not feel any unexpected or extreme pain. If you do experience pain, report it to your IVF specialist immediately.

Myth 3: IVF requires complete bed rest

Almost every couple who comes in for IVF treatment asks this question. We have working women who come in for ovum pick-up and return to work the same day or the next day, and women who resumed normal activities and work within 1-3 days of the transfer and kept working throughout pregnancy. There is no need to treat an IVF pregnancy any differently from a natural pregnancy, and you don’t need to confine yourself to bed after the embryo transfer.

That said, a certain amount of care is advised in any normal pregnancy, and that should be followed here too. Avoid lifting heavy objects and excessive physical exertion. You can take up pregnancy yoga to strengthen the body and prepare for delivery.

Myth 4: IVF babies are not normal

One of the most baseless myths is that IVF babies will have issues later in life. Millions of babies have been born around the world through assisted reproduction over the last three decades, and there is no scientific evidence to support this myth. In fact, IVF and recent advances in ART have made it possible to screen embryos for genetic abnormalities and to help couples with a family history of disorders have healthy children. IVF children are the same as naturally conceived children.

Even babies who are conceived naturally are vulnerable to birth defects. Babies born through IVF have a lower risk of birth defects, because the selection procedure ensures that good-quality oocytes and sperm are chosen for fertilization. On top of that, an advanced technique like ICSI is used if the partner’s sperm appears abnormal or if the partner has a low sperm count; under this procedure, the IVF specialists select a single healthy sperm for fertilization, which greatly lessens the risk of birth defects and abnormalities. IVF specialists also create multiple embryos and transfer only the healthy ones to the uterus.

stackify.com

Spring AOP Tutorial With Examples

You may have heard of aspect-oriented programming, or AOP, before. Or maybe you haven’t heard of it but have come across it through a Google-search rabbit hole. You probably do use Spring, however, so you’re probably curious how to apply AOP to your Spring application.

In this article, I’ll show you what AOP is and break down its key concepts with some simple examples. We’ll touch on why it can be a powerful way of programming and then go into a contrived, but plausible, example of how to apply it in Spring. All examples will be within a Spring application and written in JVM Kotlin, mainly because Kotlin is one of my favorite languages.

Quick Description of AOP

“Aspect-oriented programming” is a curious name. It comes from the fact that we’re adding new aspects to existing classes. It’s an evolution of the decorator design pattern. A decorator is something you hand-code before compiling, using interfaces or base classes to enhance an existing component. That’s all nice and good, but aspect-oriented programming takes this to another level. AOP lets you enhance classes with much greater flexibility than the traditional decorator pattern. You can even do it with third-party code.
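
To see what AOP generalizes, here is a minimal, hand-rolled decorator sketch in Kotlin (the Greeter interface and classes are hypothetical, not part of the Spring examples below): the wrapper adds logging around the core component, but every wrapped class needs this wiring written by hand, which is exactly the ceremony AOP removes.

    // A hand-coded decorator: hypothetical Greeter interface and implementations.
    interface Greeter {
        fun greet(name: String): String
    }

    class SimpleGreeter : Greeter {
        override fun greet(name: String) = "Hello, $name"
    }

    // The decorator adds behavior around the core component without changing it,
    // but it must be written and wired up manually for every component.
    class LoggingGreeter(private val inner: Greeter) : Greeter {
        override fun greet(name: String): String {
            val result = inner.greet(name)
            println("greet('$name') returned '$result'")
            return result
        }
    }

    fun main() {
        val greeter: Greeter = LoggingGreeter(SimpleGreeter())
        println(greeter.greet("world"))
    }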

The Parts of Spring AOP

In AOP, you have a few key parts:

  • Core component. This is the class or function you want to alter. In Spring AOP, we’re always altering a function. For example, we may have the following command:

    @Component
    class PerformACommand {
        @Logged
        fun execute(input: String): String {
            return "this is a result for $input"
        }
    }
    

  • Aspect. This is the new logic you want to add to the existing class, method, sets of classes, or sets of methods. A simple example is adding log messages to the execution of certain functions:

    @Aspect
    @Component
    class LoggingAspect {
    
        @Around("@annotation(Logged)")
        fun logMethod(joinPoint: ProceedingJoinPoint) {
            var output = joinPoint.proceed()
            println("method '${joinPoint.signature}' was called with input '${joinPoint.args.first()}' and output '$output'")
        }
    }
    
    • JoinPoint. OK, now the terms get weird. A JoinPoint is the place within the core component that we’ll be adding an aspect. I’m putting this term here mainly because you’ll see it a lot when researching AOP. But for Spring AOP, the JoinPoint is always a function execution. In this example, it will be any function with an “@Logged” annotation:
    @Target(AnnotationTarget.FUNCTION)
    annotation class Logged
    • Pointcut. The pointcut is the logic by which an aspect knows to intercept and decorate the JoinPoint. Spring has a few annotations to represent these, but by far the most popular and powerful one is “@Around.” In this example, the aspect is looking for the annotation “Logged” on any functions:
    @Around("@annotation(Logged)")

    If you wire the example code up to a Spring application and run:

    command.execute("whatever")

    You’ll see something like this in your console: “method ‘String com.example.aop.PerformACommand.execute(String)’ was called with input ‘whatever’ and output ‘this is a result for whatever’”

    Spring AOP can achieve this seeming magic by scanning the components in its ApplicationContext and dynamically generating code behind the scenes. In AOP terms, this is called “weaving.”


    Why AOP Is Useful

    With that explanation and those examples in hand, let’s move on to the favorite part for any programmer. That’s the question “why?” We love this question as developers. We’re knowledge workers who want to solve problems, not take orders. So, what problems does AOP solve in Spring? What goals does it help one achieve?

    Quick Code Reuse

    For one thing, adding aspects lets me reuse code across many, many classes. I don’t even have to touch much of my existing code. With a simple annotation like “Logged,” I can enhance numerous classes without repeating that exact logging logic.

    Although I could inject a logging method into all these classes, AOP lets me do this without significantly altering them. This means I can add aspects to my code in large swaths quickly and safely.
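
    As a quick sketch (the two command classes here are hypothetical, not from the article), the single “@Logged” annotation from the earlier example could be reused across unrelated components, with the one LoggingAspect decorating both:

    // Hypothetical components reusing the @Logged annotation defined earlier;
    // the LoggingAspect decorates both without either class changing.
    @Component
    class RegisterUser {
        @Logged
        fun execute(email: String): String = "registered $email"
    }

    @Component
    class SendNewsletter {
        @Logged
        fun execute(topic: String): String = "sent a newsletter about $topic"
    }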

    Dealing With Third-Party Code

    Let’s say I would normally inject shared behavior into a function that I then use in my core components. If that code is provided by a third-party library or framework, I can’t do that! I can’t alter the third-party code’s behavior. Even if it’s open source, it will still take time to understand and change the right places. With AOP, I just decorate the needed behavior without touching the third-party code at all. I’ll show you exactly how to do that in Spring with the blog translator example below.

    Cross-Cutting Concerns

    You’ll hear the term “cross-cutting concerns” a lot when researching AOP. This is where it shines. Applying AOP lets you stringently follow the single responsibility principle. You can surgically slice out of your core components the pieces that aren’t connected to their main behavior: authentication, logging, tracing, error handling, and the like. Your core components will be much more readable and changeable as a result.
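
    As an illustration, here is a sketch of slicing one such concern (execution timing) out of the core components: a hypothetical @Timed annotation plus a single aspect, instead of timing code repeated inside every method. The imports show everything the aspect needs.

    import org.aspectj.lang.ProceedingJoinPoint
    import org.aspectj.lang.annotation.Around
    import org.aspectj.lang.annotation.Aspect
    import org.springframework.stereotype.Component

    // Hypothetical cross-cutting concern: timing any function marked @Timed.
    @Target(AnnotationTarget.FUNCTION)
    annotation class Timed

    @Aspect
    @Component
    class TimingAspect {

        @Around("@annotation(Timed)")
        fun time(joinPoint: ProceedingJoinPoint): Any? {
            val start = System.currentTimeMillis()
            try {
                // Run the core component's function, untouched by the timing logic.
                return joinPoint.proceed()
            } finally {
                println("'${joinPoint.signature}' took ${System.currentTimeMillis() - start} ms")
            }
        }
    }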

    Example: A Blog Translator

    Although I showed snippets of a logging aspect earlier, I want to walk through how we might think through a more complex problem we might have, and how we can apply Spring AOP to solve it.

    As a blog author, imagine if you had a tool that would automatically check your grammar for you and alter your text, even as you write! You download this library and it works like a charm. It checks grammar differently based on what part of the blog post you’re on: introduction, main body, or conclusion. It heavily encourages you to have all three sections in any blog post.

    You’re humming along, cranking out some amazing blog posts, when a client commissions a request: can you start translating your blogs to German to reach our German audience better? So you scratch your head and do some research. You stumble upon a great library that lets you translate written text easily. You tell the client, “Yes, I can do that!” But now you have to figure out how to wire it into your grammar-checking library. You decide this will be a great case to try out Spring AOP to combine your grammar tool with this translation library.


    Wiring It Up

    First, we want to add the Spring AOP dependency to our Spring Boot project. We have a “build.gradle” file to put this into:

    dependencies {
     implementation("org.springframework.boot:spring-boot-starter")
     implementation("org.springframework.boot:spring-boot-starter-aop")
    }
     

    Analyzing Our Core Components

    Before we implement anything, we take a close look at our tool codebase. We see that we have three main components, one for each section of a blog post:

    class IntroductionGrammarChecker {
        fun check(input: BlogText): BlogText {
           ...
        }
    }
    
    class MainContentGrammarChecker {
    
        fun check(input: BlogText): BlogText {
           ...
        }
    }
    
    class ConclusionGrammarChecker {
        fun check(input: BlogText, author: Author): BlogText {
            ...
        }
    }

    Hmm…it looks like each one produces the same output: a BlogText. We want to alter the output of each of these checkers to produce German text instead of English. Looking closer, we can see that they all share the same signature. Let’s keep that in mind when we figure out our pointcut.

    The Core Logic

    Next, let’s bang out the core logic of our aspect. It’ll take the output of our core component, send it through our translator library, and return that translated text:

    @Aspect
    @Component
    class TranslatorAspect(val translator: Translator) {
    
        @Around("execution(BlogText check(BlogText))")
        fun around(joinPoint: ProceedingJoinPoint): BlogText {
            val preTranslatedText = joinPoint.proceed() as BlogText
            val translatedText = translator.translate(preTranslatedText.text, Language.GERMAN)
            return BlogText(translatedText)
        }
    }

    Note a few things here. First, we annotate it with “@Aspect.” This cues Spring AOP to treat it appropriately. The “@Component” annotation ensures Spring Boot will see it in the first place.

    We also use the “@Around” pointcut, telling it to apply this aspect to all classes that have a method signature of “check(BlogText): BlogText.” There are numerous different expressions we can write here; a couple of alternatives are sketched below. See this Baeldung article for more. I could’ve used an annotation, like the “@Logged” above, but this way I don’t have to touch the existing code at all! Very useful if you’re dealing with third-party code that you can’t alter.
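
    For illustration, here are a couple of other pointcut expressions that could target the same checkers (imports omitted, as in the snippets above; the com.example.blog package name is hypothetical, so adjust it to your own layout):

    @Aspect
    @Component
    class IllustrativePointcuts {

        // Any check(..) method on any type under a given package, whatever its exact signature.
        @Around("execution(* com.example.blog..*.check(..))")
        fun aroundAnyCheck(joinPoint: ProceedingJoinPoint): Any? = joinPoint.proceed()

        // Any method carrying a custom annotation, as with @Logged earlier.
        @Around("@annotation(Logged)")
        fun aroundLogged(joinPoint: ProceedingJoinPoint): Any? = joinPoint.proceed()
    }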

    The method signature of our aspect always takes in a ProceedingJoinPoint, which has all the info we need to run our aspect. It also contains a “proceed()” method, which will execute the inner component’s function. Inside the function, we proceed with the core component, grabbing its output and running it through the translator, just as we planned. We return it from the aspect, with anything that uses it being none the wiser that we just translated our text to German.

    A Trace of Something Familiar

    Now that you’re familiar with Spring AOP, you may notice something about the “@Logged” annotation. If you’ve ever used custom instrumentation for Java in Retrace, you may notice it looks a lot like the “@Trace” annotation.

    The similarity of “@Logged” to “@Trace” is no coincidence. “@Trace” is a pointcut! Although Retrace does not use Spring AOP per se, it applies many AOP principles to how it lets you configure instrumentation.

    The Final Aspect

    We’ve only touched the surface of AOP in Spring here, but I hope you can still see its power. Spring AOP gives a nonintrusive way of altering our components, even if we don’t own the code for that component! With this, we can follow the principles of code reuse. We can also implement wide-sweeping, cross-cutting concerns with just a few lines of code. So, find a place in your Spring application where this can bring value. I highly recommend starting with something like “@Logged” or “@Trace” so you can easily measure and improve your system performance.

    stackify.com

    AWS Fargate Monitoring

    As companies evolve from a monolithic architecture to microservice architecture, some common challenges often surface that companies must address during the journey.

    In this post, we’ll discuss one of these challenges: observability and how to do it in AWS Fargate.

    A Little History Before Talking AWS Fargate Monitoring

    Twenty years ago, applications were different from what they are today. They were fragile and slow, and they lacked automation. Developers would package software manually. With the aid of file transfer programs, application code landed on servers. Each of these servers had a name and tags. Then, when a server went down, everybody scrambled to resolve the problem and bring the business back online. New releases took days to roll out, and businesses could afford minutes or even an hour of downtime.

    As the demands for faster, highly available applications grew, things changed. Businesses had one choice: more rapid innovation and frequent software releases.

    Businesses grew from serving a small, finite set of people in an office location to serving a multitude across multiple geographical areas. A minute of downtime could mean losing millions of dollars.

    With more tools becoming available to developers, container technology like Docker, which allows you to package your app once and run it everywhere, became widespread. And modern applications moved away from a traditional, monolithic architecture to a microservice architecture, where an application could span hundreds of services.

    While modern applications have taken a turn for the better, they are not without challenges.

    The Challenges of Modern Applications

    From development to production, modern applications come with their own challenges: server provisioning, scaling, observability, security, and maintenance. For small businesses, it can be hard to find the right talent to stand up to these challenges.

    In a world that’s changing fast, with disruption happening in every sector, a business that leads today may be overtaken by another tomorrow. You want to make sure you leverage technologies that deliver real business value while optimizing costs.

    One such technology is AWS Fargate. In 2017, AWS, a leader in cloud computing, introduced Fargate as a new way of running containers in the cloud. This further abstracted complexities away.

    AWS Fargate allows you to deploy container workloads without thinking about EC2 (servers) instances or container orchestration. The grunt work of managing EC2 instances or scaling is taken care of by AWS.

    With Fargate, now your only focus is building your Docker images and defining some basic requirements for your application. Do that, and you can go to bed with no worries.

    As exciting as AWS Fargate sounds, having 360-degree visibility into your apps could still be a problem without the right monitoring tools. For the rest of this post, we’ll look at AWS Fargate monitoring using Retrace.

    What Is Retrace?

    Retrace is a SaaS APM tool that gives you granular visibility into your applications. Just as its name indicates, it allows you to monitor and troubleshoot applications faster.

    Knowing that, let’s look at how Retrace can help you gain deep visibility into an AWS Fargate application.

    First things first. To use Retrace, you need an activation key. The Retrace container agent uses this key to authenticate to the Retrace server. You can get an activation key by signing up for a Retrace trial account. After registration, you’ll receive your activation key by email. Please note it down; you’ll need it later on.

    For the demo app, we’ll create a simple Docker container from a Node.js app, deploy to Fargate, and monitor the app using Retrace.

    Creating the Demo App

    Now, let’s proceed to create a simple Node.js container application for this demo.

    First, install express-generator by executing this:

    $ npm install -g express-generator

    Next, generate the application boilerplate with this:

    $ express fargate-app

    If you ran the command above, then your directory structure should look like this:

    [Image: directory structure generated by express-generator]

    Now, execute the command below to install the app dependencies:

    $ npm install

    For error logging in the application, we need to install a Retrace APM package. You can install it with this command:

    $ npm install stackify-node-apm

    After installation, it’s important to copy node_modules/stackify-node-apm/stackify.js to the root folder of the application. This is where you define your application environment and Retrace’s activation key.

    So, execute the command below to copy:

    $ cp node_modules/stackify-node-apm/stackify.js ./

    And when you’ve done that, then update the content of the file to this:

    //stackify.js
    exports.config = {
      /**
       * Your application name.
       */
      application_name: process.env.APPLICATION_NAME,
      /**
       * Your environment name.
       */
      environment_name: process.env.STACKIFY_ENV
    }

    Finally, open app.js and add this line at the top in order to initialize the Retrace profiler:

    //app.js
    require('stackify-node-apm')

    Okay, onto the next step. We’ll now build a Docker container image that AWS Fargate can run.

    Creating a Docker Container Image

    In order to build the Docker image for the app, we need to create a Dockerfile. A Dockerfile is a text file that contains instructions on how to assemble a Docker image. Ideally, you should have Docker installed for this step.

    To keep things simple, I’ll not go into detail about how the Docker build works. You can learn more about it here.

    At the root directory of the app, create a Dockerfile with this content:

    # filename : Dockerfile
    FROM node:12.2.0-alpine
    WORKDIR /usr/app
    COPY package*.json ./
    RUN npm install
    COPY . . 
    CMD [ "npm", "start" ]

    Then build a Docker image by running this:

    $ docker build .

    Usually, the command above will take a couple of minutes to complete the build.

    On successful completion, run the command below to see the Docker image you just built:

    $ docker image ls

    The output will be similar to the image below. Note the IMAGE ID.

    [Image: output of docker image ls showing the newly built image and its IMAGE ID]

    At this point, the image still resides in your local environment. You’ll need to push it to a Docker registry.

    A Docker registry is a repository where you can store one or more versions of your Docker image. For this demo, we’ll use DockerHub as our Docker registry.

    Before you can push a Docker image to DockerHub, you need to tag it.

    You can do so with this command:

    $ docker tag <image-id> <docker-registry-username/image-tag>

    Ensure you replace the image-id with your Docker IMAGE ID mentioned previously. You’ll also want to replace the docker-registry-username with your registry username and the image-tag with the tag. For example:

    $ docker tag 0b2b699a0c8e abiodunjames/stackify-node

    Now, we’ll push the image to DockerHub with this command:

    $ docker push abiodunjames/stackify-node

    With the Docker image in a Docker registry, the next step is creating a task definition on AWS ECS.

    Creating a Task Definition

    A task definition is the first step to running a container workload with AWS Fargate. I like to think of a task definition as a design plan for an application. It’s where we define the images to use, the number of containers for the task, the memory allocation, etc.

    But Retrace will not automatically monitor your app. AWS Fargate monitoring with Retrace requires you to define the Retrace container agent in the same task definition as your application.

    A task definition for our demo app and Retrace container agent would look like this:

    {
        "containerDefinitions": [
          {
            "cpu": 1,
            "environment": [
              {
                "name": "STACKIFY_DEVICE_ALIAS",
                "value": "Aws Fargate"
              },
              {
                "name": "STACKIFY_ENV",
                "value": "Production"
              },
              {
                "name": "STACKIFY_KEY",
                "value": "YOU-STACKIFY-KET"
              }
            ],
            "mountPoints": [
              {
                "containerPath": "/var/stackify",
                "sourceVolume": "stackify"
              }
            ],
            "image": "stackify/retrace:latest",
            "readonlyRootFilesystem": false,
            "name": "stackify-retrace"
          },
          {
            "portMappings": [
              {
                "hostPort": 80,
                "protocol": "tcp",
                "containerPort": 80
              }
            ],
            "environment": [
              {
                "name": "PORT",
                "value": "80"
              },
              {
                "name": "APPLICATION_NAME",
                "value": "fargate-app"
              },
              {
                "name": "STACKIFY_KEY",
                "value": "YOUR-STACKIFY-KEY"
              }
            ],
            "mountPoints": [
              {
                "containerPath": "/usr/local/stackify",
                "sourceVolume": "stackify"
              }
            ],
            "memory": 128,
            "image": "abiodunjames/stackify-node:latest",
            "name": "fargate-app"
          }
        ],
        "memory": "512",
        "family": "my-fargate-app",
        "requiresCompatibilities": [
          "FARGATE"
        ],
        "networkMode": "awsvpc",
        "cpu": "256",
        "volumes": [
          {
            "name": "stackify"
          }
        ]
      }
    

    Let’s put the code snippet above in a file and name it task-definition.json. But make sure to replace the placeholder YOUR-STACKIFY-KEY with your activation key.

    In the task definition, we defined two container images: stackify/retrace and abiodunjames/stackify-node (the Docker image we built). We also defined some environment variables like APPLICATION_NAME, STACKIFY_ENV and STACKIFY_KEY. These variables are passed into the containers.

    To create this task on AWS, you’ll have to execute the following command using AWS CLI:

    $ aws ecs register-task-definition --cli-input-json file://path/to/task-definition.json

    If successful, the description of the task definition will be output in your console. If you list your task definition using the command

    $ aws ecs list-task-definitions

    then you should get an output similar to this:

    {
        "taskDefinitionArns": [
            "arn:aws:ecs:eu-west-1:xxxxxxx:task-definition/my-fargate-app:1"
        ]
    }

    Creating a Service

    Before we create the service, let’s create a cluster to run the service in. You can think of a cluster as a logical group of resources (tasks and services).

    To create a cluster, run this command:

    $ aws ecs create-cluster --cluster-name my-fargate-cluster

    Now, let’s run the task with this command:

    $ aws ecs create-service --cluster my-fargate-cluster --service-name my-service --task-definition my-fargate-app --desired-count 1 --launch-type "FARGATE" --network-configuration "awsvpcConfiguration={subnets=[your-subnet-id],securityGroups=your-security-group-id,assignPublicIp=ENABLED}"

    The command above uses a few arguments. Let’s take a moment to look at them.

    • --cluster: the cluster to create the service in.
    • --service-name: the name to give the service. For this example, we named it my-service.
    • --task-definition: the task definition to use; here, the one we created earlier.
    • --desired-count: the number of replicas of our service.
    • --launch-type: the launch type, FARGATE in this case.
    • --network-configuration: where we define the subnet to create the service in and the security group to use. It accepts a subnet ID and the desired security group ID.

    Also, we enabled automatic public IP assignment.

    If you log in to the AWS console, you should see the service you just created running in the cluster, as shown in this image:

    [Image: the newly created service running in the Fargate cluster in the AWS console]
    [Image: Retrace dashboard showing metrics from the application]

    And if you log in to your Retrace dashboard, you should see metrics from your application in real time. Retrace provides a rich dashboard where you can access your application logs.

    You can drill down to see details of each log.

    In addition to application logs, Retrace also monitors your system health, availability, and resource utilization, like CPU, memory, and network usage. You can see this information in your Retrace dashboard.


    AWS Fargate Monitoring Summary and More Resources

    To summarize, we’ve learned all about AWS Fargate monitoring: how you can use the Retrace container agent to monitor your Node.js application running in AWS Fargate in real time. 

    Retrace also supports other programming languages, like .NET, Java, PHP, Ruby, and Python, as well as the EC2 launch type. To build on this knowledge, I recommend heading over to the official Retrace documentation.