CI recipients, how are you all going?

It’s been two and a half months for me since activation. I understand speech very well in quiet one-on-one settings and sometimes do well with 2 or 3 people in quiet. People I regularly talk to I understand well. With new people I usually have to listen really hard. The robotic-sounding voices have slowly faded away, especially the familiar ones. My wife’s voice is almost how I remember it.

I seem to understand female voices better. Most male voices sound deep and similar, which makes it tough in group conversation as I am constantly looking to see who’s talking.

Hearing birds is amazing. They seem so loud, how could I not hear them before? And yes, being bimodal I don’t know where they are.

Haven’t been tested yet for word recognition but I go in for a mapping this Wednesday. Last mapping I understood my audiologist with a mask on. She enunciates very clearly and usually wears a clear face mask.

I am very happy with my results and glad this amazing technology is available. I am eligible for the other ear and I am strongly considering it in the near future.

4 Likes

Did you do any type of auditory rehab exercises or make use of the streaming devices? I have the Kanso 2 by Cochlear. Almost 2 years, no benefit. All I hear is static. No word recognition. I am also deaf in only 1 ear, which seems to be a drawback: 2 different sounds competing. I am happy you are doing so well with yours.

I have the N7 with the Kanso 2 as a backup.
For rehab I mostly stream TED Talks on my desktop computer using the Mini Mic. I did a little with Cochlear’s CoPilot. I also stream TV with the TV Streamer when I really want to hear the program clearly.

Hi
When you say you’re streaming TED with either the Mini Mic or the TV Streamer, do you just sit there and listen, watch the subtitles, or do you talk along with the subtitles?
I just started my mappings earlier this month and don’t seem to be making much progress. I have the N7 on my left side and use my HA on my right side.
Do you use an HA as well?

For the first 8 weeks I just used my CI, streaming directly to my ear. After that I used my Resound Enzo 3D aid. I used several apps initially but then moved to my local library app, then streamed audiobooks directly to my ear.

With every mapping speech became a lot clearer.

If you don’t have a Resound aid that has been properly set up by your CI AuD, you won’t stream bilaterally… just to your CI…

Here’s a list of some apps that are available. You also have your local library app, and you also have ESL on your computer.

1 Like

Depending on the clarity of the speaker or the program I am watching, I try to listen without looking at the captioning. I still find myself looking at the captions when I don’t catch what is being said.

I have a hearing aid for my right ear but I don’t use it. I find it confusing and distracting. It does not help much since I am eligible for a CI in that ear. I am considering having that side implanted in the near future and may discuss that with my audi at my mapping this morning.

2 Likes

The hearing aid in your better ear can be confusing for some when trying to learn speech in the CI ear.
We are each very different in how we learn. Somehow you need to figure out what works best for you to learn speech with your CI. I personally watched old TV shows that I knew by heart, using the Cochlear TV Streamer directly to the CI only, with closed captions on. That way I could read, hear, and recall the words all at the same time. It worked great for me. You need to figure out what works for you.
Good luck

3 Likes

It’s disappointing that your rehab hasn’t worked out. I’m also deaf in one ear only. Had the cochlear implant done about 8 months after the sudden deafness (SSHL). The two ears sound completely different!

For my rehab, it was zero technology at the beginning. None whatsoever. It was my wife sitting across a table from me, with tables of words from the Adult Cochlear Implant Home-Based Auditory Training Manual Postlingual Hearing Loss available here. My good ear was covered with an earplug and a headphone playing white noise. Using the cochlear side only and looking away from her lips, I would listen to my wife as she read single words, and say each word back to her. She would show me the word if I got it wrong, and repeat the sound until my brain trained on that word. This took us about 30–60 minutes per day, and we had planned for it before the surgery date by reducing my work hours to allow for it.

At the beginning, I got 0% correct. Gradually, this improved. I still can’t tell the difference between M and N sounds in single words, and certain vowel sounds, but they work in a sentence context.

Later, we moved to pitch recognition using a piano. It was hopeless, until I had a hybrid component placed to use the residual low pitch hearing. That was great for a month or so, until that residual hearing suddenly disappeared. Lost all pitch recognition again now, despite further training & trying. Never mind, got the good ear still & as a result of not using the hybrid processor, the device is much simpler to manage.

I started streaming podcasts a couple of months later & it was hard. Now, I’m up to streaming certain podcasts at 150% speed (which I would normally listen to at 200% speed), but other podcasts which use background music behind the speech are at 100% speed or impossible. British, American, Australian, Scottish accents are fine. General African & Middle Eastern accents are very hard for me to understand. Irish accents I can’t even understand using my good ear!

Overall, it’s been a win. Hearing is not perfect; far from it! But it makes hearing easier than not having it, and prepares me in case the other ear ever decides to take a permanent holiday too. I’ve come to accept that I’ll never use the cochlear ear to listen to Rachmaninov again, as the nuances of any classical music are impossible. Perhaps some rap might work, if I ever get into that!

8 Likes

Thank you for this bcarp! You found a way to make the training work, and your wife sounds amazing to help you so much!

@bcarp I’m very happy for you that you have been able to make rehab work for you and your CI. Congratulations to you, and your wonderful wife for helping you.

I also ditched my hybrid EAS after I lost more residual hearing at 15 months post CI.

1 Like

I wish I had known about what you had done. It sounds great, and that was something I could have done. I’ll try it, but it is now almost 2 years since activation. Good luck.

Kathy, it’s never too late to start rehab. As someone else said, you also have ESL classes on the computer.

It’s been 15 months post activation as my only access to sound, and I think it’s been spectacular for the most part. Things are still evolving with lots of the finer details being filled in.

One of the things that has taken a while to come around is the sound of rain on the roof. For most of a year it was a rather unusual oscillating, high-pitched tinkling that had me wondering what this particular sound was on a number of occasions. But now it sounds pretty much like it used to.

In terms of listening to people, in a more ideal environment it’s great and so easy to do so. It’s fine in noisy environments too, though I do have to be particularly mindful of the clarity bubble - moving a couple of steps closer to the speaker can make all the difference. Listening fatigue isn’t an issue like it was pre-CI.

As far as scores go, the last time I was tested was 6 months in and, if I remember correctly, it was 82%, but I haven’t been tested again since. I’m not really fussed by scores, but I do pay attention to how I’m really travelling in real-world listening.

I’m curious about this. I’m always looking for extra things that may help here and there. I do have a bit of an annoyance with what the N7 considers to be the threshold for switching from the speech program (which is great, I love it!) to the speech-in-noise program, which just attenuates everything and is not helpful.

All in all, I’m pretty happy with how things are going for me!

4 Likes

Hmm, also wondering here. What is the “cafe” program?

1 Like

@bcarp from my understanding, some AuDs give a program/scan where you can hear better in noise and call it “cafe”. Some do programs/scans such as a music program, where you are meant to be able to hear music better. It’s more a US phenomenon, plus some European AuDs.

I asked my AuD in Melbourne about it in the early days; she just looked at me blankly. She didn’t have a clue. I only have one current scan.


Here is a picture of my Nucleus page. My number three program is the Cafe program. Also note the Forward Focus selections; these help most for me in noisy environments.
It’s not a made-up program, it’s a factory Cochlear program.
As far as SCAN goes, it’s in pretty much all Cochlear programs. It’s simply noise reduction, in my understanding. It can be adjusted in many ways, more or less.
Cochlear does not have programs that change automatically like Phonak, for example.

2 Likes

Hmm, it seems like a factory setting that can be manually added to the processor if you want, then. I’ve never been offered it here, but SCAN seems to cover it anyway.

The SCAN is an automatically changing program. There are a number of scenes it can automatically select from. I believe the noise reduction adjusts depending on the detected environment. I suppose Cafe mode is either a forced implementation of one of the SCAN scenes or perhaps a more extreme version of one of those scenes. I do use Forward Focus a lot, in conjunction with SCAN (which moves to “speech in noise”) in noisy environments.

1 Like


@Raudrive, here are some pictures of my screen with SCAN activated. The red circled area shows the currently auto-detected scene. One shows me listening to my wife in a noisy environment (detected as speech in noise), and another listening to a cello playing (detected as music).

This is a unilateral system, not bilateral. The Music program (2) is one with wind noise reduction and gate control turned off. The two lower programs duplicate the upper programs but with wider electrical stimulation pulse widths to decrease battery usage at the expense of lower sound fidelity, in case I ever need it as has happened in the past.

1 Like

Are you saying your processor will automatically change between your number 1, 2, 3 and 4 programs?
Thanks for the pictures, they are very helpful. The processor’s recognition of different environments is interesting.

My understanding of these programs is that they are manually selected by you. Each has a purpose for you to use in different environments. When you select one of those 4 programs, it stays there until you manually change it. Each of those manual programs reacts differently to environments based on the tuning the fitter has applied.

If I am mistaken, I am all ears to learning.:ear:

1 Like

Sorry. I was unclear. Program 1 is SCAN, where I spend 99% of my time. Within this program, the N7 processor changes its processing depending on the environment.

Programs 2, 3 & 4 are manually selected. I use program 2 for quiet concert music, in environments where SCAN (program 1) would think the music itself is just background noise because it’s too soft.

I haven’t needed to use programs 3 or 4 for several months now, as they are backups for when my implant impedance inexplicably rises once in a while, affecting battery life. These are battery-conserving programs, which also lower sound quality.

3 Likes