Hi, Dani, Very cool! Do you still have any residual hearing in your CI ear? --Steve
@Dani I have no idea where I’ve been over the last 2 weeks; I missed this update. I’m very happy to read that you’re doing very well with your CI and rehab. It’s slow progress but so worth it, just not to have to strain your brain with concentration trying to understand others.
I love your zebra skins on the magnet. All you men have one up on the ladies: you can show off your wonderful and different skins. Whereas mine is hidden under my hair!!
I had my 6-month follow-up on Thursday last week. My WRS did not change on paper, but my real-life feeling is that it has improved dramatically since June. On the other hand, the tests always happen right after I get a new mapping, so I never had time to get used to the new sound first. And that has changed now.
In my 1st and 2nd programs I’ve changed 2 centre frequencies (out of 12) so that octaves really sound like octaves. During my last session we did a good job of reaching this goal in those first two programs. Now we have changed all 4 of my programs to the same centre frequencies.
My previous 1st program was backed up to my 4th position.
My new 3rd program is still a copy from the first follow-up in April, which has more bass and less treble than the new 4th program.
My new 1st and 2nd programs are a combination of #3 and #4, i.e. more treble than #3 and more bass than #4. #1 differs from #2 in that #1 has no filters activated; it is the one switched on for the vast majority of the day.
Now when I want to listen to music I switch to program 3 for less highs. This sounds less warbling than my default. When I am in a difficult environment and want to listen to people talking, I switch to my 4th program with reduced low frequencies. This eliminates unwanted noise dramatically. My 2nd is just a fallback in case I get too much noise at once. Statistics (from the app) say I use #1 for 95% of a normal 14.6-hour day.
I wanted to get a “real” music program where I can hear quiet notes at the same time as loud notes. My audiologist commented that none of his patients were happy with such a change, because a Med-EL CI only delivers a “static” dynamic range of about 6dB at any one moment (meaning a sound must have at least half the sound pressure of any other simultaneous sound to be heard at the same time) - if he changed the compression strategy, I would no longer be able to distinguish the loudness of loud sounds. I didn’t try this change. I am already happy that I can enjoy music with hi-hat, drums and especially snare drums.
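For reference, “half the sound pressure” and 6dB are the same statement: a factor of 2 in sound pressure is about 6dB. A quick sanity check in Python (my own illustration, nothing from the fitting software):

```python
import math

def pressure_ratio_to_db(ratio):
    """Convert a sound-pressure ratio to decibels (20 * log10)."""
    return 20 * math.log10(ratio)

# A factor of 2 in sound pressure is roughly 6 dB,
# which matches the "static" dynamic range figure above.
print(round(pressure_ratio_to_db(2), 2))  # ~6.02
```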
You are doing absolutely great.
Your thought process concerning tuning is very interesting.
Thank you for the update.
@Dani So I’m curious about this dynamic range you mention. As per the Cochlear Implant Help dot com comparison chart (https://cochlearimplanthelp.files.wordpress.com/2020/01/cochlearimplantcomparisonchart_v10.1e.pdf), the MED-EL implants currently have a 75 dB input dynamic range?
Thanks for the follow up. Your progress continues to be outstanding. What types of music or instruments are easier and harder for you to hear?
Are you still planning your second CI early next year?
This 6dB output range is what my audiologist told me. I guess that a SP takes the loudest hearable slice of around 18dB from the 75dB input range, compresses it by a factor of 3 and maps it to those 6dB of “output”. But maybe I am wrong.
Edit: maybe this 6dB range applies to my current threshold and MPO values? I don’t know.
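My guess above can be written as a toy mapping (purely illustrative, invented by me; real fitting software certainly does more than this): take the loudest ~18dB slice of the 75dB input range and compress it 3:1 into a 6dB output window.

```python
def compress_slice(input_db, slice_top=75.0, slice_width=18.0, ratio=3.0):
    """Toy 3:1 compressor: maps the top `slice_width` dB of the input
    range into slice_width/ratio dB of output. Levels below the slice
    are clipped to the output floor. Illustrative sketch only."""
    slice_bottom = slice_top - slice_width        # e.g. 57 dB
    level = max(slice_bottom, min(input_db, slice_top))
    return (level - slice_bottom) / ratio         # 0 .. 6 dB "output"

print(compress_slice(75.0))  # 6.0 (top of the slice -> full output range)
print(compress_slice(66.0))  # 3.0 (middle of the slice)
```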
On the other hand, I can hear the trees’ leaves in a very light wind (<30dB SPL). As soon as someone is talking (>50dB SPL) those leaves disappear, and they come back within a fraction of a second when the person stops talking. You may check something similar for yourself. The compression strategy of my SP works way better than my Naida B70 ever did. My CI on my right sounds much more natural to me than my Naida on my left. The latter sounds overpowered, although I can’t decrease the volume without losing comprehension. I also can’t increase it, otherwise it hurts. When I use the Enzo2 as my left HA it sounds more similar to my CI, but I don’t understand anything on the phone with only my Enzo. With only my Naida I am able to have conversations on the phone.
However, this compression strategy works better in all manufacturers’ sound processors than in hearing aids, not only Med-EL’s. I don’t know about EAS on this point.
@StevenS I am planning my 2nd CI as soon as I can get it. I can hardly wait. I want it done now, or at least tomorrow.
You are making amazing progress @Dani. I’m so happy for you that you can listen to music now and enjoy it. A lot of people can’t. You understand far more about your mappings than I do, I have no clue as to what they are doing. It sounds ok so I don’t delve into all the technical details. It’s way above my comprehension, as I’m not an IT person.
I used to love listening to music before my “mappings from hell” happened. My appointments to get this rectified keep being cancelled on me due to Covid.
Thank you Deaf_piper!
It’s not that I am an IT professional. It’s that I am used to experimenting with audio material. I knew what was physically happening before I got my CI. Because of my hearing loss and the inability of my HA dispenser I was so frustrated that I went DIY, so now I know very well what effects low and high frequencies have and how they depend on each other. In fact I think fitting a CI is much simpler because there is no sound wave (sound pressure), only single notes that the SP has to transfer to the CI. In a CI you can cut low frequencies all the way without interfering with higher frequencies, to increase intelligibility (sound quality is something else, however…). This is not possible with HAs because the sound pressure reaches one’s residual hearing directly (on the other hand: as long as one has residual hearing it sounds more natural).
Moving those centre frequencies is great. But even if it weren’t possible, my CI would be okay, too. It’s a matter of getting used to the sound. Training is easier the more you listen with the CI only. That’s one reason why I want my second CI as soon as possible.
Music: it’s not that I love music like I did in earlier times. There are tracks that sound very similar to what I am used to. But there are also tracks where, with my CI, I can’t even work out which song it is. Since my CI I have learned to love listening to drums. But my main goal is still to follow conversations with ease.
Awesome news, congrats on your next step! You really have a flexible brain
Oh, I have to have a flexible brain. I am married
I think I need to get some additional detail on this idea of changing center frequencies. Is this something along the lines of selecting each individual electrode and making sure that they each respond to external stimuli that are far separated tonally from the others?
I hope I understand you correctly.
My audiologist gives me the reports from the fitting software. There you can see the centre frequency for each electrode. That means that this frequency is stimulated by only that electrode. Frequencies between two centre frequencies are stimulated by the two neighbouring electrodes with different intensities, depending on the frequency.
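The “two neighbouring electrodes with different intensities” behaviour can be sketched as simple linear weighting between adjacent centre frequencies. This is my own simplification (real processors use overlapping filter bands), just to show the idea:

```python
def electrode_weights(freq, centers):
    """For a frequency between two centre frequencies, return the two
    neighbouring electrode indices and their linear mixing weights.
    Simplified sketch; real strategies use overlapping filter bands."""
    for i in range(len(centers) - 1):
        lo, hi = centers[i], centers[i + 1]
        if lo <= freq <= hi:
            w_hi = (freq - lo) / (hi - lo)   # closer to hi -> more weight
            return (i, 1.0 - w_hi), (i + 1, w_hi)
    raise ValueError("frequency outside the covered range")

# Hypothetical centre frequencies for two neighbouring channels:
print(electrode_weights(1500, [1000, 2000]))  # ((0, 0.5), (1, 0.5))
```

A tone exactly at a centre frequency gets weight 1.0 on that one electrode, which matches what the fitting report describes.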
I believe that my cochlea is longer than the longest electrode array (FlexSoft, 26.4 mm active length), so that the low frequencies nearly match the correct position, whereas the electrodes for higher frequencies sit too deep within the cochlea, i.e. the perceived pitch is too low. If your electrode array length matches your cochlea, then there should be no need to adjust centre frequencies.
I didn’t know there were reports from the fitting software, that’s interesting. Never received those.
How do you tell, though, that your electrode is matched to the location? You have to do some kind of test with a combination of audiometric pure tones and mapping, yes?
Correct. I’ve tried listening to the given centre frequencies (sine waves) on my CI side via streaming, and finding a matching frequency using another device streaming to my HA on my left. In fact, mathematically I was off by 7 semitones out of the 12 between 1kHz and 2kHz (= 1 octave). I gave these found frequencies to my audiologist and he tried using them as new centre values. Now voices sound much more familiar to me than before my last session in June. Voices have “sound” rather than a robotic quality. It’s still not perfect but it is ok now.
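For anyone who wants to try the same comparison: the 12 equal-tempered semitones between 1kHz and 2kHz are each a factor of 2^(1/12) apart. A small helper of my own to generate the test frequencies:

```python
def semitones(base_hz=1000.0, count=13):
    """Equal-tempered semitone frequencies spanning one octave from
    base_hz: 13 values from base_hz up to 2 * base_hz inclusive."""
    return [round(base_hz * 2 ** (n / 12), 1) for n in range(count)]

# 1000.0, 1059.5, ..., 2000.0 — one octave in 12 semitone steps
print(semitones())
```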