
AppForce1 Worklog
Bi-Weekly podcast. I'm going to share my journey as an iOS developer in real-time. The wins, the struggles, the lessons learned, and the code that actually works. No fluff, no corporate speak, just honest developer-to-developer conversations.
AppForce1 Worklog: When Your Volume Slider Has a Mind of Its Own
Make sure to let me know what you think of this episode.
I completely refactored an audio system for a work app, splitting a single AVAudioEngine into separate engines for recording and playback. This architectural change fixed a bizarre bug where the system volume slider moved unexpectedly during audio operations.
• Split AVAudioEngine into separate recording and playback engines
• Fixed the MPVolumeView movement issue by unifying audio session management
• Improved background task management for location tracking services
• Removed dead code and deprecated functionality
• Explored solutions for audio session conflicts, threading issues, and memory leaks
• Implemented dedicated dispatch queues for different audio operations
• Created a robust background task management system for location updates
• Added extensive logging to better understand audio session lifecycles
Looking ahead to SwiftUI integration, audio performance optimization, and iOS 26 compatibility testing. Do iOS 2025 is happening November 11-13 at NEMO Science Museum in Amsterdam - check out do-ios.com for more information.
Do iOS: https://do-ios.com
Rate me on Apple Podcasts.
Send feedback on SpeakPipe
Or contact me:
- Mastodon: https://hachyderm.io/@appforce1
- X: https://x.com/appforce1
- BlueSky: https://bsky.app/profile/appforce1.net
- LinkedIn: https://www.linkedin.com/in/leenarts/
Support my podcast with a monthly subscription, it really helps.
My book: Being a Lead Software Developer
All right, let's get started. iOS Development Worklog, episode 105: Audio Engine Refactoring and Background Task Management. Welcome to the first episode of my iOS Development Worklog. I'm Jeroen Leenarts and this is where I'll share the real work I've been doing, the challenges I've faced and the lessons I've learned. No fake demos, no oversimplified examples, just honest insights from building real iOS applications. Let's get started.
Jeroen:The Week in Review: this week was all about audio engineering and background task management, two areas that are notoriously tricky in iOS development. Let me break down what actually shipped and what didn't. What shipped? Audio engine refactoring. I completely refactored the audio system in the app I'm working on for my job, splitting the single AVAudioEngine into separate engines for recording and playback. This was a major architectural change that touched multiple components: several files changed, 256 insertions, 44 deletions. The refactoring included creating separate recording and playback engine instances, introducing a unified audio session helper to manage audio session state, fixing the MPVolumeView movement issue that was driving users crazy, and improving the walkie-talkie service integration with the new audio architecture. The MPVolumeView fix was actually the most satisfying fix of the week. The system volume slider was moving unexpectedly during audio operations and users were reporting it as a bug. The root cause was multiple audio engines fighting over audio session control. By unifying the audio session management, I eliminated the conflicts. Background task management: I improved our location tracking service to properly handle background tasks, preventing iOS from killing our location updates when the app goes to the background. This involved implementing proper background task lifecycle management and preventing duplicate background tasks.
Jeroen:And there was also a code cleanup. I removed some dead code and deprecated functionality, including an unused MPVolumeView slider extension, cleaned up deprecated audio session code, and removed a few unnecessary @MainActor annotations that were causing threading issues. Then there's also some stuff that didn't ship. I was working on an audio session optimization. I spent considerable time trying to optimize audio session management, but some of the changes introduced new issues, so I had to roll back parts of that work. Sometimes the best code is the code you don't write. Performance improvements: I had some ambitious plans for audio buffer optimization that didn't pan out. The current implementation is actually performing better than my optimized version, which is a good reminder that premature optimization is still the root of all evil.
Jeroen:SwiftUI integration: I planned to start integrating SwiftUI into the existing UIKit app, but the audio refactoring took priority and consumed most of the week. So, the honest assessment: this week was frustrating in the best way possible. Audio on iOS is genuinely hard and every simple fix seems to introduce three new problems. But that's exactly why I want to share this with you, because this is what real iOS development looks like. The most challenging part was debugging the MPVolumeView issue. It's one of those bugs that's hard to reproduce consistently, but when it happens it's immediately obvious to users. The fix required understanding the entire audio pipeline and how different components interact with the system audio session. So let's dive into the code deep dive: the audio engine split. Let me walk you through the biggest technical change I tackled this week, splitting the single AVAudioEngine into separate recording and playback engines. The problem: the app has a walkie-talkie feature that needs to handle both recording and playback simultaneously.
Jeroen:The original implementation used a single AVAudioEngine for both operations, which was causing several issues. Audio session conflicts: the single engine was fighting with system audio controls. Performance issues: recording and playback were interfering with each other. MPVolumeView movement: the system volume slider was moving unexpectedly during operations. Threading issues: audio operations were happening on different threads without proper coordination. Memory management: the single engine was creating complex retain cycles. The solution architecture: here's how I approached this refactoring. Instead of having one audio engine trying to do everything, I created two separate engines, one specifically for recording and another for playback.
Jeroen:In the AudioManager class I now have two private properties, recordingEngine and playbackEngine, both instances of AVAudioEngine. This separation allows each engine to be optimized for a specific purpose. For recording, I have properties like isRecording to track state, isTapInstalled to know whether I have set up the audio tap, and the bus number, which is always zero for the input bus. For playback, I have a player node, which is an AVAudioPlayerNode, a mixer node for audio mixing, and an input converter for handling different audio formats.
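To make that concrete, here's a minimal sketch of the split in Swift. AudioManager, recordingEngine, playbackEngine and the other names are illustrative stand-ins for what I just described, not the actual production code:

```swift
import AVFoundation

// Minimal sketch of the split-engine setup (illustrative names, not production code).
final class AudioManager {
    // One engine dedicated to capturing microphone input.
    private let recordingEngine = AVAudioEngine()
    // A second engine dedicated to playing received audio.
    private let playbackEngine = AVAudioEngine()

    // Recording state.
    private var isRecording = false
    private var isTapInstalled = false
    private let inputBus: AVAudioNodeBus = 0   // input node bus, always zero here

    // Playback graph piece.
    private let playerNode = AVAudioPlayerNode()

    init() {
        // The player node has to be attached before it can be connected
        // to the playback engine's main mixer.
        playbackEngine.attach(playerNode)
        playbackEngine.connect(playerNode, to: playbackEngine.mainMixerNode, format: nil)
    }
}
```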
Jeroen:The key insight here is that each engine can now be configured independently. The recording engine can be optimized for low-latency input, while the playback engine can be optimized for smooth output. This eliminates the conflicts we were seeing before. The breakthrough came when I realized that AVAudioEngine is designed to be specialized. Each engine should have a single, clear responsibility. By separating recording and playback, each engine could be optimized for its specific use case. But here's the crucial part: the engines still need to share the same audio session. This is where the complexity lies, and why the MPVolumeView was moving unexpectedly.
Jeroen:The implementation details. I will now walk you through how this actually works in practice. For the recording engine: when we start recording, the startRecording function first checks whether we're already recording, to prevent duplicate operations. Then it calls setupRecordingEngine, which configures the engine for offline rendering with a 4096-frame buffer. That gives us good performance without overwhelming the system. And the key part is installRecordingTap, which sets up a tap on the input node. This tap captures audio data as it flows through the engine and calls our processRecordingBuffer function with each audio buffer. Think of it like installing a microphone somewhere along the audio stream. For the playback engine: the setupPlayer function creates an AVAudioPlayerNode, attaches it to the playback engine, and connects it to the main mixer node. This creates the audio graph that allows us to play audio. The playback engine is configured for real-time rendering with a smaller 1024-frame buffer, which is perfect for smooth playback without the latency concerns we have with recording. The beautiful thing about the separation is that each engine can be started, stopped, and configured independently. We can be recording on one engine while playing back on the other, and they won't interfere with each other.
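As a rough sketch, continuing the AudioManager example from above (and assuming both pieces live in the same file), the recording tap and the playback path could look something like this; the buffer size follows what I just described, the function names are placeholders:

```swift
import AVFoundation

extension AudioManager {
    // Recording path: install a tap on the input node so every captured
    // buffer is handed to processRecordingBuffer.
    func startRecording() throws {
        guard !isRecording else { return }                 // avoid duplicate starts

        let input = recordingEngine.inputNode
        let format = input.outputFormat(forBus: inputBus)
        input.installTap(onBus: inputBus, bufferSize: 4096, format: format) { [weak self] buffer, _ in
            self?.processRecordingBuffer(buffer)
        }
        isTapInstalled = true

        try recordingEngine.start()
        isRecording = true
    }

    // Playback path: schedule buffers on the player node that is already
    // connected to the playback engine's main mixer.
    func play(_ buffer: AVAudioPCMBuffer) throws {
        if !playbackEngine.isRunning {
            try playbackEngine.start()
        }
        playerNode.scheduleBuffer(buffer, completionHandler: nil)
        playerNode.play()
    }

    private func processRecordingBuffer(_ buffer: AVAudioPCMBuffer) {
        // Encode and hand the captured audio to the walkie-talkie service here.
    }
}
```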
Jeroen:The debugging process to get to this conclusion: this wasn't a smooth implementation, because a lot of things went wrong, and this is how I debugged it. First of all, there were audio session conflicts: the engines were fighting over audio session control. Second, there were threading issues: recording and playback were happening on different threads. Then there were memory leaks: of course the separate engines were creating retain cycles. And of course we had the visual issue of the MPVolumeView slider moving when we started recording. The debugging strategy I used was to use AVAudioEngine's built-in logging to track engine state. I added extensive logging to understand the audio session lifecycle. I used Instruments to profile memory usage and identify leaks. And I created a test harness to reproduce the MPVolumeView issue consistently.
Jeroen:So let's dive into this MPVolumeView mystery. This was the most frustrating part of all: the MPVolumeView. The system volume slider was moving unexpectedly during audio operations. Let me explain a little bit what the MPVolumeView is. It's basically a simple view that you can put on your screen and that attaches itself automatically to the system volume. But it depends on what your output channel is: the earpiece of the iPhone, the speaker of the iPhone, a Bluetooth headset or a wired headset all have different volumes. So if you change something that switches the output channel, the MPVolumeView will follow that and clearly indicate the volume of the playback channel you currently have selected. And that was causing the jumping around, because we were not being consistent with the output channel we were choosing. It actually took me a few hours to be able to explain this to you in a few sentences, so that was a bit of a challenge.
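For reference, MPVolumeView itself is trivial to use; the jumping came from the session handling, not from the view. A minimal sketch of putting one on screen:

```swift
import MediaPlayer
import UIKit

final class VolumeViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // MPVolumeView mirrors the volume of whatever output route is active,
        // which is exactly why inconsistent route changes make it jump around.
        let volumeView = MPVolumeView(frame: CGRect(x: 20, y: 100,
                                                    width: view.bounds.width - 40,
                                                    height: 44))
        view.addSubview(volumeView)
    }
}
```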
Jeroen:So multiple audio engines were trying to control the audio session. Each engine had different settings and was using different audio session properties, and the system was interpreting these changes as user input. The fix was to ensure that only one component manages the audio session state. The final solution I came up with, once I had this breakthrough of understanding, was a unified audio session manager, called the audio session helper. It is a singleton, yes, I know, that both of the engines use to manage a shared audio session. And here's the key insight: instead of each engine trying to configure the audio session independently, they all go through the central manager. The manager tracks the current state: what category is set, what options are configured, what sample rate is being used, and whether the session is active.
Jeroen:The critical part is in the setup walkie-talkie audio session function. Before making any change to the audio session, it checks whether a setup is already in progress, to prevent race conditions, and then it only reconfigures the session if something has actually changed. This is what fixed the MPVolumeView slider issue. The problem was that multiple engines were constantly reconfiguring the audio session even when nothing had changed. iOS was interpreting these unnecessary changes as user input, which caused the volume slider to move. Now the session only gets reconfigured when it actually needs to be, and both engines share the same session state. The walkie-talkie options include things like mixing with other audio, defaulting to the speaker, and allowing Bluetooth connections, all the settings we need for a walkie-talkie app.
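A minimal sketch of that "only reconfigure when something actually changed" guard, assuming a helper along these lines (the class name and the tracked state are illustrative; the category options follow what I just described):

```swift
import AVFoundation

final class AudioSessionHelper {
    static let shared = AudioSessionHelper()   // yes, I know, a singleton
    private init() {}

    private var isConfiguring = false
    private var currentCategory: AVAudioSession.Category?
    private var currentOptions: AVAudioSession.CategoryOptions = []

    func setupWalkieTalkieSession() throws {
        guard !isConfiguring else { return }    // a setup is already in progress
        isConfiguring = true
        defer { isConfiguring = false }

        let category: AVAudioSession.Category = .playAndRecord
        let options: AVAudioSession.CategoryOptions = [.mixWithOthers, .defaultToSpeaker, .allowBluetooth]

        // Only touch the session if something actually changed; redundant
        // reconfiguration is what made the volume slider jump.
        guard category != currentCategory || options != currentOptions else { return }

        let session = AVAudioSession.sharedInstance()
        try session.setCategory(category, options: options)
        try session.setActive(true)

        currentCategory = category
        currentOptions = options
    }
}
```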
Jeroen:So, the walkie-talkie service integration. The walkie-talkie service also needed some updates to work with the new audio architecture for push-to-talk. This service acts as the coordinator between the audio manager and the rest of the app. One of the most important changes was creating separate dispatch queues for different audio operations. I have three dedicated queues: one for receiving audio data, one for sending audio data, and one for playing audio. Each queue uses user-initiated as the quality of service, which gives audio operations priority over other background tasks. This is crucial because audio is very time sensitive. If audio processing gets delayed or interrupted, you get clipping, stuttering or dropped audio. By giving audio operations their own high-priority queues, we ensure smooth performance. The service's begin audio recording and end audio recording functions are now much simpler. They just call the corresponding methods on the audio manager and update the walkie-talkie state. The complexity is hidden in the audio manager, and hiding the complexity is an architectural principle that makes the service easier to test and also easier to maintain. The key insight here is that audio operations need dedicated queues to prevent clipping and ensure smooth operation, and using user-initiated quality of service ensures that audio operations get priority over background tasks.
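Something like this is what those dedicated queues can look like; the labels and the bundle identifier prefix here are placeholders:

```swift
import Foundation

enum AudioQueues {
    // Separate, user-initiated queues so audio work is never stuck behind
    // ordinary background work.
    static let receive = DispatchQueue(label: "com.example.app.audio.receive",
                                       qos: .userInitiated, attributes: .concurrent)
    static let send = DispatchQueue(label: "com.example.app.audio.send",
                                    qos: .userInitiated, attributes: .concurrent)
    static let playback = DispatchQueue(label: "com.example.app.audio.playback",
                                        qos: .userInitiated, attributes: .concurrent)
}

// Usage: hop onto the right queue before touching that part of the pipeline,
// e.g. AudioQueues.playback.async { /* schedule the next buffer */ }
```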
Jeroen:Then there was also something I did that I like to call the tool talk: background task management. This week I also dove into background task management for the location tracking that the app does. This is one of those iOS topics that seems simple but is actually quite complex, especially when you're dealing with location services that need to run continuously. The challenge was that our app needs to track location even when it's in the background, but iOS is very aggressive about killing background tasks. The challenge is managing the lifecycle of background tasks properly while ensuring that location updates continue to work reliably. The specific issues we were facing: duplicate background tasks, where multiple location updates were starting new background tasks without ending previous ones. Task expiration: iOS was killing our background tasks before location updates were completed. Memory leaks: background tasks weren't being properly cleaned up. And there was also a performance impact, because too many background tasks were hurting the app's performance. The solution we came up with is a robust background task management system with proper lifecycle management, in a class we call the MBLocationService.
Jeroen:The key is having a single background task ID property that tracks whether we have an active background task when we need to do location work. The startBackgroundTask function first checks if there's already a task running. If there is, it skips creating a new one. If we need to start a new task, it calls UIApplication.shared's beginBackgroundTask with a descriptive name, location update, and an expiration handler. This expiration handler is crucial. It gets called if iOS decides to kill the background task before you're done, and it ensures you can still clean everything up properly. The endBackgroundTask function is equally important. It checks if we have a valid task ID, calls endBackgroundTask to tell iOS we're done, and resets the task ID to invalid. The updateLocation function ties it all together: start the background task, do the location work, then end the background task. This ensures that location updates can complete even when the app is in the background, but we're not hogging system resources.
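Here's a hedged sketch of that lifecycle. MBLocationService is the name from the episode; the internals, the log messages, and the task name are illustrative:

```swift
import UIKit

final class MBLocationService: NSObject {
    private var backgroundTaskID: UIBackgroundTaskIdentifier = .invalid

    func startBackgroundTask() {
        // Never stack a second task on top of a running one.
        guard backgroundTaskID == .invalid else {
            print("📍 Background task already running, skipping")
            return
        }
        backgroundTaskID = UIApplication.shared.beginBackgroundTask(withName: "LocationUpdate") { [weak self] in
            // iOS is about to kill the task; clean up so it doesn't leak.
            self?.endBackgroundTask()
        }
    }

    func endBackgroundTask() {
        guard backgroundTaskID != .invalid else { return }
        print("📍 Ending background task \(backgroundTaskID.rawValue)")
        UIApplication.shared.endBackgroundTask(backgroundTaskID)
        backgroundTaskID = .invalid
    }
}
```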
Jeroen:So why were we doing this background task work? What was actually happening? We were getting a background push notification with a location update, usually when you enter or exit a geographic region, and one of the things this implementation was doing was asking the system for an exact GPS location. Because of the way this API works, the function that gets called finishes its processing after you ask the system to ping the GPS, but you get the actual location back through a callback. So what we did was, before we return from this push notification callback, we actually start the background task. Then we end this function, then we end the push notification handling. We have asked the system for a GPS update, and we return. Then, because the background task is active, the operating system keeps the app in memory.
Jeroen:The GPS gets the location and calls back into our application to report it. Through these GPS callback functions we get the location, we process it, then we end the background task and return. What happens now is that the app gets started through one push notification call. A background task is started, which keeps the app alive. Something happens in the background while we have already returned from the first call. Then another callback gets called, and that actually clears the background task, so the operating system knows: okay, now it's safe to shut the app down again, and it can start a proper shutdown of the application.
Jeroen:So we needed to make sure that we only started a single background task and didn't create a new one if there was already a GPS ping running. The startBackgroundTask function uses a guard statement to check if we already have an active task. If we do, it logs a message and skips creating a new one. This prevents duplicate task creation and avoids a big headache for us. And then the cleanup: the endBackgroundTask function is defensive. It checks if we have a valid task ID before trying to end it. It logs the task ID for debugging, calls the system's end background task method, and resets the internal state.
Jeroen:And, of course, there's the expiration handling. The expiration callback is where the magic happens. If iOS decides a background task has run for too long, it calls this callback. We use a weak self reference to avoid retain cycles, and we ensure cleanup happens even if the task gets killed unexpectedly. So this three-part approach, start, work, end, ensures that background tasks are managed properly throughout the entire lifecycle.
Jeroen:I hope this was an understandable explanation. The reason this approach works is that we have a single background task that keeps the app alive while the GPS ping is running, and it prevents the app from being terminated. Of course, we want to be neat citizens on iOS: we do proper cleanup when we have the opportunity. This prevents memory leaks and resources being retained for too long. The same goes for the expiration handling. We've added extensive logging because all of this is happening in the background.
Jeroen:You can't really see and tell in the app what is happening. So you have to base your conclusions and your observations on what's appearing in log files and, where appropriate, we used weak reference enclosures to prevent retained cycles. So the tool that we used was the UI application background task. The key tool was the UI application shared object and then the begin background task function on that. This gives you a limited amount of time, usually about 30 seconds, to complete a background piece of work. So some best practices here always end background tasks when you're done. Don't start multiple background tasks if you can help it. Handle the expiration callback properly and use descriptive names where possible, because this really aids in debugging. And make sure that you do proper memory management.
Jeroen:So use weak references where appropriate, make sure that you clean things up, and log the background task lifecycle for debugging so that you can actually see what is happening. This background task management system integrates seamlessly with the location service. When the CLLocationManager calls didUpdateLocations, we extract the most recent location, start a background task, process the location update, and then end the background task. The processLocationUpdate function does three things: it updates an internal last known location property, notifies the delegate about the new location so it can be sent to the server, and it updates the location tracking state. This pattern ensures that location updates can complete their work even when the app is backgrounded, but we're not leaving background tasks hanging indefinitely.
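Sketched out, and continuing the MBLocationService example from earlier, that delegate integration looks roughly like this (the processLocationUpdate internals are just placeholders):

```swift
import CoreLocation

extension MBLocationService: CLLocationManagerDelegate {
    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        guard let latest = locations.last else { return }

        startBackgroundTask()            // keep the app alive while we work
        processLocationUpdate(latest)
        endBackgroundTask()              // tell iOS we're done
    }

    private func processLocationUpdate(_ location: CLLocation) {
        // 1. Update the internal last-known-location property.
        // 2. Notify the delegate so the location can be sent to the server.
        // 3. Update the location tracking state.
    }
}
```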
Jeroen:There are some performance considerations you need to be aware of. Background tasks have a performance impact, so it's important to minimize background task duration: keep background tasks as short as possible. You want to try and batch operations, so if you have related operations, group them together. Make sure that you monitor your task count, because you should know how many background tasks you start and make sure you don't start too many. And use an appropriate quality of service if you are using any queues, because that gives you the right priority in the background and gives the system a better chance of giving you some CPU time at an appropriate moment. The result, after implementing this background task management, is that location updates now continue to work reliably in the background. It used to be that once the GPS callback into our application finished and we had done our fetching of a location on the CLLocationManager, we finished our processing, returned out of the callback function, and then the system thought: okay, we're done processing, no background task active, kill the application, because the work is done. So we've been able to get better quality location updates in the background by implementing this background task mechanism. We made sure we didn't have duplicate background tasks, and we do proper cleanup to avoid memory leaks. And we have better performance now because the tasks are very efficient, and because of the logging it was much easier to debug. So, lessons learned from all the things I just mentioned.
Jeroen:So, both from the audio engine and the GPS work. Audio engineering is hard. This week really reinforced that audio on iOS is genuinely difficult. The combination of audio sessions, AVAudioEngine, and system audio controls creates complex interactions that are easy to break. What I'd do differently next time is start with a simpler audio architecture and add complexity gradually, instead of doing it in one big move. The single-engine approach was actually working fine for the use case, but the refactoring was necessary to fix the MPVolumeView issue, because it was a very visible thing that was annoying users, even though technically nothing was really wrong. We could probably have avoided this issue if we had approached it a little more incrementally. The real insight is that audio on iOS is not just about code, it's about understanding how the system works. The MPVolumeView issue taught me that iOS interprets audio session changes as user input, which is why the volume slider was moving.
Jeroen:And background tasks, that's the second lesson, need very careful management. Background task management in iOS requires discipline. It's easy to forget to end background tasks, which can lead to memory leaks and poor performance. What I'd do differently next time is create a dedicated background task manager class that handles the lifecycle automatically. This would be a reusable component that tracks multiple named background tasks so that we have some control over that. The manager would have a dictionary mapping task names to their identifiers, and it would provide startTask and endTask methods that take a name parameter. This would make it easy to have multiple background tasks running simultaneously without conflicts. The startTask method would check if a task with that name is already running and, if not, create a new background task with the provided expiration handler. The endTask method would look up the task by name and clean it up appropriately. This approach would give me a much easier time creating a scalable and reusable system that works across the different parts of the app that need background task management.
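As a sketch of that idea, not shipped code, such a manager could look like this:

```swift
import UIKit

final class BackgroundTaskManager {
    // Named tasks mapped to their identifiers.
    private var tasks: [String: UIBackgroundTaskIdentifier] = [:]

    func startTask(named name: String, expirationHandler: (() -> Void)? = nil) {
        guard tasks[name] == nil else { return }       // that task is already running
        tasks[name] = UIApplication.shared.beginBackgroundTask(withName: name) { [weak self] in
            expirationHandler?()
            self?.endTask(named: name)                 // clean up even on expiration
        }
    }

    func endTask(named name: String) {
        guard let id = tasks.removeValue(forKey: name) else { return }
        UIApplication.shared.endBackgroundTask(id)
    }
}
```

A real version would also need to make access to the dictionary thread safe, but the shape is the same.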
Jeroen:And then there's a third lesson: sometimes the best code is the code you don't write. I spent hours trying to optimize audio buffer management, only to discover that the current implementation was already performing well. This is a good reminder that premature optimization is still the root of all evil, and the real lesson here is that performance optimization should be data-driven. I was optimizing based on assumptions rather than actual performance measurements. The current implementation was already efficient, and my optimizations actually made things worse. What I'd do differently next time is measure before I get started, and once I have measurements, make small increments and check whether they actually improve on the quality of processing we already had. You can really use Instruments here to profile the actual performance, and then optimize based on real data. And there's a fourth lesson here: logging is your best friend.
Jeroen:Audio debugging is nearly impossible without extensive logging. I added logging at every step of the audio pipeline, which made debugging much simpler. A pro tip is to use emojis in your log messages to make them easy to spot in the console: a microphone emoji for recording, a speaker emoji for playback, and one of those map pins if you want to log something about locations. The real insight here is that logging isn't just for debugging, it's also for understanding. By logging the audio session lifecycle, I was able to see exactly when and why the MPVolumeView was moving.
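In practice that can be as simple as prefixing the messages, for example with os.Logger; the subsystem, category, and messages here are placeholders:

```swift
import os

let audioLog = Logger(subsystem: "com.example.app", category: "audio")

func logExamples() {
    audioLog.info("🎤 Recording engine started")
    audioLog.info("🔊 Playback engine started")
    audioLog.info("📍 Location update received, starting background task")
}
```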
Jeroen:And then there's even a fifth lesson: threading is critical for audio. Audio operations need dedicated threads to prevent clipping and ensure smooth performance. The walkie-talkie service uses three separate dispatch queues: one for receiving audio data, one for sending audio data, and one for playing back audio. Each queue has a descriptive label, which aids in logging and debugging, that includes our app's bundle identifier and the specific purpose of the queue. They all use user-initiated quality of service, which gives audio operations priority over background tasks, and concurrent attributes to allow multiple operations to run simultaneously. This separation is crucial because audio is time sensitive, as I already mentioned. If audio is not processed in time, or is interrupted, you get clipping, stuttering, and dropped audio. By giving each type of audio operation its own high-priority queue, we ensure smooth operation. So audio operations are time critical, and using user-initiated quality of service ensures they get exactly the priority they need.
Jeroen:And then there's a bonus lesson, I should say, and that is number six: code cleanup is worth the time. Removing dead code and deprecated functionality might seem like busywork, but it's actually crucial for maintainability. This week I removed an unused MPVolumeView slider extension, a bunch of deprecated audio session code, and some unnecessary @MainActor annotations. Dead code creates confusion and makes debugging harder. By removing unused code, I made the code base cleaner and easier to understand. And user experience matters, that was the whole purpose of diving into this MPVolumeView issue; this is lesson seven. The MPVolumeView was driving users crazy, even though it wasn't technically a bug in the app. Users don't care about the technical details, they just want the app to work as expected, and a volume slider moving around on its own is not really expected behavior. User experience bugs are just as important as functional bugs. Sometimes the most satisfying fixes are the ones that improve the user experience even if they're not technically complex, although in this case getting it right was actually quite complex.
Jeroen:So, looking ahead a little bit, I'll probably be working on some more audio performance optimization, some more background task monitoring, and some error handling improvements. I want to get started on the SwiftUI integration, and I want to do some Combine refactoring, because there's a mix of reactive code that uses delegates and Combine, and we need to standardize on one approach for better consistency and easier testing. What I'm really excited about is the SwiftUI integration. I'm planning to start integrating SwiftUI into the existing UIKit app. This will be a gradual migration, of course, starting with new functionality and features, and I'm particularly excited about using SwiftUI for the walkie-talkie interface or some new component that we might be adding. I also want to improve the testing infrastructure, because, especially for audio input components, audio testing is notoriously difficult, but I think we can create some good test harnesses there.
Jeroen:Do iOS 2025 is happening and I'm really excited about the upcoming event in Amsterdam this November. It's on November 11th to 13th at the NEMO Science Museum, and it's going to be an incredible opportunity to connect with fellow iOS developers and learn about the latest trends in iOS development. The conference features workshops on topics like building connected devices with Embedded Swift, plus two days of inspiring talks from industry leaders. If you're interested in iOS development, I highly recommend checking out do-ios.com, that's do-ios.com, link in the show notes. It's going to be a great way to stay current with the latest iOS technologies and network with other developers facing similar challenges. What I'm dreading is iOS 26 compatibility.
Jeroen:iOS 26 is now released, and we really need to start testing our audio implementation and see if there are other issues. I did cursory tests and it all seems to work fine, but there's always that thing you didn't look at or forgot about, and we really need to get it right. I also want to look at memory management in the app. There are some strange things we've noticed, and we need to be very careful when dealing with memory management; there are some complex retain cycles, and we need to ensure that everything is properly cleaned up. So I really think, in the big picture of things, this week was very foundational. I've established a solid audio architecture that should serve us well going forward, and the next phase is about optimization and integration.
Jeroen:I want to make sure that the audio system is not just functional, but also that the performance is good and that it stays maintainable. And the most important thing that I learned this week is that audio on iOS is a system-level concern. It's not just about writing code. It's about understanding how the system works and how different components interact. This understanding will be crucial as we continue to build and optimize the audio features. So I want to hear from you about your experience with audio on iOS, of course.
Jeroen:So some questions for you, if you care; use one of the channels I'm available on to answer them. Audio architecture, that's the first question: how do you architect audio components in your apps? Do you use separate engines for recording and playback?
Jeroen:Second is background tasks: what is your approach to background task management? Have you found any patterns that work well for you? Testing audio: how do you test audio functionality? What tools and techniques have worked for you? Performance: I mentioned Instruments. Are there other ways you test performance, or different tools that you use, and what specific metrics are you most interested in? And then SwiftUI and audio, that's probably something I need to do at some point as well: has anybody integrated SwiftUI with audio components, and were there any specific challenges that you faced?
Jeroen:And another, final question is about the Do iOS conference: are you planning to attend Do iOS in Amsterdam? And, if you look at the speaker list and their topics, which topics are you most interested in? As always, you can reach me on Twitter at appforce1, that's appforce and then the numeral one, on LinkedIn, Mastodon, Bluesky, anywhere. I will make sure that those links are in the show notes. Make sure to reach out, and you can always use the text-the-show feature of my podcast. Your input will really help me shape future episodes and the direction of this worklog. Also, if you're planning to attend Do iOS, I'd really love for you to connect with me there. It's always great to meet fellow iOS developers in person and share experiences, and you can find me at the conference, of course, and I'll be sharing some insights from this worklog and the audio engineering challenges with anyone who's interested while we're there.
Jeroen:So this week was all about audio engineering and background task management, two areas that are notoriously tricky in iOS development, and this is what we covered. The key achievements were that the audio engine refactoring was successful: I split the AVAudioEngine into separate recording and playback engines. The MPVolumeView issue was fixed, so that thing is no longer moving around unexpectedly. We fixed the background task management, implementing proper lifecycle management for location tracking, and I did a lot of code cleanup by removing dead code and deprecated functionality. We did a technical deep dive on the audio architecture: a detailed walkthrough of the audio engine split and how we managed the audio sessions, with a unified audio session helper to prevent conflicts and to make sure we don't set audio session properties that are already set, so the same values don't get applied twice. We covered the background task lifecycle, proper management of iOS background tasks, and some threading issues. And there were also some lessons learned. In recap: audio on iOS is genuinely difficult and requires system-level understanding. Background tasks need careful lifecycle management. Sometimes the best code is the code that you don't write.
Jeroen:Logging is essential for audio debugging. Threading is critical for audio performance. Code cleanup is worth the time. User experience bugs are just as important as functional bugs. And, looking ahead, I'm going to be working on audio performance optimization, SwiftUI integration, Combine refactoring, iOS 26 compatibility testing and verification, and some legacy code cleanup, because we're probably going to deprecate support for older iOS versions. We're now supporting back to iOS 15, and I'm aiming for iOS 17 as a minimum. I'm hoping that this episode demonstrated what real iOS development looks like: the challenges, the debugging process, and the satisfaction of solving complex problems. Next time we'll dive into the things I'll be working on this week. Keep building amazing iOS apps. That's it for this week's worklog, I'll keep you posted on any developments, and make sure to check the links in the show notes.