Writing for the User Experience: The Three “E”s of Technical Writing

As technical writers, we know that documentation is vital to the user experience, but the best technical writers know that the key to a great UX is to include the three “E”s: expectations, engagement, and empowerment. By keeping these three elements top-of-mind, technical writers can produce documentation that exceeds user expectations, engages users on a personal level, and empowers them to be successful. Let’s take a closer look at each of these three “E”s.

1. Expectations

The first “E” stands for expectations. It’s important to set the right expectations for your users from the very beginning. For example, if you’re writing documentation for a complex piece of software, it’s important to let the user know that upfront. Otherwise, they may get frustrated when they encounter difficulty using the software and think it’s due to a lack of understanding on their part.

Your users will have expectations too. They expect the document to be clear, concise, and free of errors. They also expect it to be easy to navigate and understand. If your document falls short in any of these areas, users will likely become frustrated and give up on trying to use it.

2. Engagement

The second “E” stands for engagement. To keep your users engaged with your documentation, you need to write in a clear and concise manner. Use plenty of headings and subheadings to break up the text and make it easier to scan, and don’t forget to include plenty of examples and screenshots to illustrate key points.

In addition, it is important to engage users on a personal level. One way to do this is by using case studies or real-world examples whenever possible. After all, people are more likely to use something if they feel a personal connection to it. When writing your documentation, be sure to use a tone and style that is approachable and relatable. Write like you would speak, without using jargon or overly technical language.

3. Empowerment

The third “E” stands for empowerment. Your goal should be to empower your users with the knowledge they need to be successful. Give them the information they need to complete their tasks efficiently and effectively. Anticipate their questions and concerns ahead of time so that you can address them before they even have a chance to ask. In addition, don’t forget to include links to additional resources where users can go for more help if they need it. By empowering your users, you’ll create advocates for your product or service—and for your company as a whole.

The next time you sit down to write some documentation, keep the UX in mind by including the three “E”s: expectations, engagement, and empowerment . . . your users will thank you for it!

Tales from the Orchard: Hear Steve Jobs nail the future of mobile a decade ago

An audio recording of an interview with the former Apple CEO comes to light.

By Marrian Zhou of CNET

“The phone of the future will be differentiated by software.” A decade later, in the era of iOS and Android, that prediction by Steve Jobs has come true.

Jointly published Wednesday by The Information and The Wall Street Journal, an audio interview from 2008 reveals the Apple CEO’s thoughts on the future of mobile phones when Apple’s App Store was barely a month old.

“I think there are a lot of people, and I’m one of them, who believe that mobile’s going to get quite serious,” Jobs told reporter Nick Wingfield, then at the Journal and now at The Information. “They can be mighty useful and we’re just at the tip of that. That’s going to be huge, I think.”

The App Store turned 10 this year on July 10, and it’s evident that our lives are vastly different from 2008. Today, 500 million people from 155 countries visit the App Store every week, choosing from more than 2 million apps available for download, according to Statista.

The Apple co-founder, who passed away in October 2011, also got it right when it comes to mobile games.

“You’ve got everything from games to medical software to business analytics software to all sorts of stuff on it,” Jobs said in the 2008 interview, “but games is the single biggest category … I actually think the iPhone and the iPod touch may emerge as really viable devices in this mobile gaming market this holiday season.”

Today, the games category of apps available on the App Store tops the platform with a 25 percent market share, according to Statista. The second largest category is business apps, with a 10 percent market share.

Apple didn’t immediately respond to a request for comment.

You can listen to the full interview at The Information or The Wall Street Journal.

Tales from the Orchard: Apple may release new iPhone colors this year, including red, blue, and orange


  • Apple is expected to release three new iPhone models this fall.
  • The least expensive model could come in a variety of colors, including blue, red, and orange, according to an analyst.

By Kif Leswing of Business Insider

Apple could release an iPhone later this year with gray, white, blue, red, and orange color options.

The colorful new phone would be a less expensive model that has facial recognition and an edge-to-edge screen, according to details from the TF International Securities analyst Ming-Chi Kuo shared by 9to5Mac.

The iPhone 8 is available in silver, black, gold, and red. The iPhone X comes in silver and black.

Apple watchers are expecting three new iPhones this year: one that looks like the iPhone X but with updated components; a supersize version of the iPhone X; and a less expensive, colorful iPhone with an edge-to-edge LCD screen and facial recognition that costs $650 to $750.

The supersize iPhone X could come in another new color — gold — according to the note shared by 9to5Mac.

Apple most recently released a lower-cost iPhone model with the iPhone 5c in 2013. That too came in a variety of colors, but it was not as strong a seller as Apple had hoped, and the model was discontinued soon after its release.

Kuo is a well-regarded analyst who often reveals new details about Apple’s production plans before they are public. While he was at another bank last November, he shared this graphic with his prediction about the 2018 iPhone lineup:


How do you feel about the potential colors for the new iPhone lineup? Sound off in the comments below!

Tales from the Orchard: How Apple Killed Innovation


By Simon Rockman of Forbes

Mobile World Congress 2018 was strange. All the innovation was on the network side; handsets have become boring. While those touting 5G were talking about network slicing, full duplex radio, millimetre waves and massive MIMO, the handset folks seemed to think a better camera, smaller bezels and painful emojis were in some way special.

Phones were not always like that. Back before Barcelona, it was 3GSM, which those of us at What Mobile Magazine called the Cannes Phone Festival. Each handset manufacturer had something new and exciting. Maybe it was the 8810, Razr or P800, all fabulous, innovative phones. Sometimes it was the N-Gage, V.box or Serenata. At least they tried.

But somehow there is the Orwellian myth that Apple invented the smartphone. Indeed, there was a recent BBC radio documentary charting the need to detox from smartphones which said ‘now the country which invented the smartphone is working on the cure’. I did a triple-take. Was the BBC saying that Apple invented the smartphone? It was, and on that point Radio 4 was wrong. But America did invent the smartphone; it’s just that the SIMON was an IBM invention. So the BBC was right, but didn’t know it. SIMON was the first ever smartphone, with predictive text and a touch screen, 13 years before Steve Jobs sprinkled marketing fairy dust over an overpriced 2G phone with severe signalling problems, broken Bluetooth and the inability to send a picture message.

Nothing in the iPhone was something which hadn’t been seen before. It’s often treated as the flagbearer for the devices we have in our pockets, and maybe it was: Apple showed that marketing was more important than technology. The mobile phone industry is suffering the consequences. Not only has Apple sucked all the revenue out of the rest of the industry, it imposes huge technical challenges by ignoring standards.

I work at a mobile network which doesn’t sell Apple products and yet we’ve had to spend a huge amount of time and money making sure that our customers don’t get corrupted messages when they are sent from an iPhone.

We went from a world of bars, flips, clams, sliders and rotators each with a design language where you could spot the manufacturer from styling cues to a world of two designs. Phones that looked like an iPhone and phones that looked like a Blackberry. Now all phones are just black rectangles.

Apple charges operators through the nose. It’s taken all the portal revenue and now no-one makes any money out of devices so there is no fundamental research done. It all comes down to what Qualcomm and MediaTek tell the manufacturers to make. Testing phones is hard, very, very hard and it’s about to become many times more difficult with 5G where the complexities of mmWave testing mean you can’t use cables and all testing has to be done over the air in an expensive-to-rent anechoic chamber.

So everyone plays it safe.


It’s not like the ideas are not out there. Plucky Brit start-up Planet Computers has built the Gemini, there’s the gestating Monohm Runcible, and there are some amazing concepts like the Arcphone, which is inspired by the Motorola Razr.

Just googling ‘concept phone’ will bring up a swathe of ideas.

But there is hope. Not just in the form of small companies doing interesting things. Indeed, not even in that hope. Planet are very unusual in shipping a product; the vast majority fall by the wayside.

The hope comes in the death of the smartphone. You see, smart is escaping. It will no longer reside in the one device you stare at but become omnipresent. One trend at Mobile World Congress was the rise of virtual assistants. Samsung will be there with a dedicated Bixby device, T-Mobile wants you to shout “OK Magenta”, and Telefonica has announced commercialisation of its product. All this follows on the heels of Alexa and her friends. And people like the devices.

As your cooker, television and bath all become smart, there is less need for the single smartphone. What is the phone today will become something else, and we’ll go back to the time when phones were used for voice.

Such generational changes are normal. When I started in the analogue mobile phone industry, the dominant, unassailable handset manufacturers were Motorola and NEC. It became Nokia, Ericsson and Motorola. To believe in today’s status quo, with a dominant Samsung and Apple, is to fail to remember the future. Peak Apple? Maybe not yet, but we are well past peak innovation, and the disruption cannot come soon enough.

Tales from the Orchard: Apple Needs to Make Siri Great at Something.


By JHROGERSII of iPad Insight.com

With the HomePod showing up on my doorstep next Friday, I’ve been doing some thinking about Siri lately. Why is the overall impression of Apple’s digital assistant so negative? There are recent surveys and tests showing it as being competitive with Alexa, Google Assistant, and Cortana in some areas. There is real evidence that many “normal” users aren’t as dissatisfied with it as we in the tech community and the “Apple bubble” are. So what is the problem? Where is the disconnect?

Consistency is Key

I think the problem with the general perception of Siri is twofold. First, I have been begging for Apple to unify Siri across its platforms and make its feature set consistent from device to device. Unfortunately, not only has that not happened, but now we have yet another unique Siri implementation on the way that will be specific to the HomePod.

Users shouldn’t have to remember that Siri on Apple TV can only handle media requests and HomeKit, or that Siri on the Mac can save a list of previous responses, but can’t talk to HomeKit devices. Why can’t we get the saved Siri results from the Mac at least on the iPad? Now we have an intelligent speaker that won’t work for a lot of common Siri queries that we can perform on the iPhone we will use to set it up. Why Apple? Why? None of this makes any sense at all. All it takes is Siri not coming through or confusing a user a few times for them to give up on it and move on.

One positive is that I’m certainly not the only one talking about this. I was very happy to hear Rene Ritchie of iMore also discussing making Siri consistent across all Apple platforms during Monday’s Vector podcast. He was also advocating for Apple to make Siri a cloud-based service that works across all devices, which would also be a very welcome addition. This could still be done while maintaining users’ privacy, so Apple shouldn’t try to hide behind that excuse anymore.

While many of us have been asking about this for a while now, the fact is that Mr Ritchie has eyes and ears inside of Apple and may actually be able to exert some influence on the situation. If he is bringing it up, at least it is likely to be heard within the glass walls of Apple Park. I mean, the guy was able to get an Instagram pic with Tim Cook at a hockey game, right? That’s a lot closer than most of us will ever get.

Make Siri Great…For the First Time

Even as an Apple fan, I have no problem admitting that Siri has NEVER been great at anything. I, like most people, gave it a pass at release because it was new and different. However, when Apple didn’t improve it or truly move it forward after several years, most people lost their patience with it. I still use it often for basic tasks, such as reading messages, creating alarms, and placing phone calls. However, we are a long way down the road from those tasks being impressive.

In my opinion, for all of the things Siri does, the biggest problem is that Apple never focused in and made it great at any of them. Some of its features, such as entering or reading off appointments or reminders, or setting timers, are very good and pretty consistent. The ability to ask Siri to remind me about a phone call, email, voicemail, or web page that is on the screen is also very useful (for those who know the feature exists).

However, I wouldn’t qualify any of the above features as “great,” because there are still times when they break down. For example, Siri will just stop recognizing the “Remind me about this” command on occasion, and ask me what I want to be reminded about. When this happens, I have to reboot my iPhone to get the feature back online. That just makes me shake my head, because this is a really useful feature that I take advantage of often. It is two years old now, so this really shouldn’t be happening anymore.

Unfortunately, these features are still the best that Apple has to offer with Siri, and they still have glaring issues. Then you get into the real problem areas. Dictation still comes and goes and struggles mightily with proper names and context. Asking Siri questions often just results in a web search that will quickly disappear from the screen. Trying to use context between actions will sometimes work and sometimes just break down. Combine the failures with the lack of consistency and shortage of and restrictions on third party integrations and you have too many pitfalls for users to fall into.

What is the difference?

So what’s the real difference between Apple on the one hand, and Google and Amazon on the other? Both of their assistants have legitimate issues and shortcomings, as well. Google doesn’t play much better with third parties than Apple, and in some cases, Assistant is actually harder for them to work with (although this year’s CES shows that Google is addressing this). As for Alexa, just try using it on a smartphone or other non-Amazon hardware. Amazon has the same issues as Apple with sub-par mics that aren’t set up to be used with a voice assistant.

While Amazon has given third party developers an open door, Alexa doesn’t allow for any contextual awareness with its “Skills.” Users have to memorize set commands and queries, and if they forget, their requests don’t work. I have heard Echo users who are otherwise very happy with Alexa curse it over this shortcoming. Even the most favored voice assistant of the moment has its issues if you get past the hype.

So, both of Apple’s primary competitors in voice assistants have legitimate shortcomings that users are very aware of. Why do they get a pass on them while Apple doesn’t? It is because both Assistant and Alexa are legitimately great at one or more things that users find very useful. If you ask Google Assistant questions, it will give you direct correct answers very quickly. It will translate on the fly. It will search, recognize and digitize written text. Oh, and it has a very similar feature set across the board where it is available. Google handles this better than any other assistant by far, and frankly, no one else is even close right now.

As for Amazon, they doubled down on making the basics near perfect. The Echo devices have multiple beam-forming mics that do an impressive job of picking up your voice and accurately parsing your requests, even in the presence of background noise. The Alexa experience may have a steep drop-off on third party hardware, but most people are using it on Amazon’s because of how inexpensive and easily available they have made it. Their system’s combined ease of use has made people comfortable using voice assistants. And again, like Google Assistant, Amazon’s Alexa feature set is very consistent, no matter what device you are using it on.

Along that same line, another key for Amazon (that Google wisely copied ahead of Apple) is that they made a device that put the voice assistant in a different context. Many people are still self-conscious about using Siri and other assistants in public, especially when using a headset or AirPods. While this has become more commonplace over the last decade, it can still look pretty odd watching someone “talk to themselves” while walking down the street. There are a lot of people who are too self-conscious to do that.

The beauty of the Echo is that it takes the voice assistant and makes it available throughout a room. You don’t have to carry a phone around and be subject to the limitations of its mics. “Hey Siri” works, but it is locked to a device that is meant to be with you, not across the room. The Watch is great if you have one, but it isn’t capable of making all of the same voice responses to your queries yet. The Echo took the genie out of the bottle by making a device that is dedicated to monitoring an entire space, and it is clear that users prefer this experience. Alexa was also set up in such a way as to make users feel less self-conscious about using it in the open. They are having a conversation with a device that responds aloud, so the experience is natural and more “human.”

Another strength of Amazon’s Alexa is the third party ecosystem that has sprung up around it. While I mentioned the limitations of Alexa Skills as being a drawback, the fact that they exist is still a big strength. HomeKit may have been there first, but people have embraced Alexa because there is convenience in being able to link devices that they want to use together without headaches and restrictions. While the defined commands required to use Alexa Skills may cause some frustration, the amount of third party integrations available is still a strength that Amazon has over both Google and Apple.

Getting a pass

The bottom line is, Google’s Assistant and Amazon’s Alexa do get a pass on their shortcomings, but they get it for legitimate reasons. People don’t get as irritated over them because both of these assistants have aspects that are truly great. On the flip-side, Apple doesn’t get a pass for Siri’s shortcomings because there isn’t a similar feature that it has or task it performs that is similarly great. There is no positive bubble or reality distortion field here. Without that, people will pile on the negative aspects and won’t give much credit for the things that are good.

Every time I hear Siri discussed on a tech podcast, even an Apple-centric podcast, this is what it comes down to. There are complaints and the typical, “Siri sucks” comments. Then someone will usually mention a feature or two that is good and works well for them, and people will backpedal a bit and agree. Then there is usually a more reasonable discussion about all the things that don’t work as well. I hear the exact same in reverse with discussion on Assistant and Alexa, with the overall impression being positive. However, you will often hear the same backpedaling and admissions that certain features of those assistants don’t work so well. These overall positive and negative impressions come down to doing a few things very well, and the reactions around the three assistants are remarkably consistent across the tech world because of this.

We just heard a rumor this week that Apple is scaling back the planned features in iOS 12 to focus on software stability. I can only hope that Siri will be one of the items that will be focused on over the course of this year as part of this. The fact that Craig Federighi was supposedly behind this move and that Siri is now under his jurisdiction is cause for some optimism that improvements will be made going forward into 2018. Even if Apple won’t say it, the moves they have made to bolster their AI and machine learning efforts over the last two years, as well as their downplaying of Siri as an intelligent assistant in the first HomePod, show me that they see the problems. However, the question remains: do they have the right answers to fix them?

If Apple can create a more consistent user experience for Siri across all of its platforms, it will help cut down on frustration and might actually encourage more Apple device owners to use it. However, to turn around the service’s tarnished reputation and get it seen in a favorable light, Apple needs to double down on one or two core features that they know users want to be improved. They need to take them, hammer everything out, and make them great, whatever that takes.

I’m talking bulletproof. Rock solid. The kind of great that no reviewer can deny. That is what it will take to turn heads at this point, so that’s what they have to do.

The current path of incremental upgrades and new feature additions isn’t improving the situation or users’ impressions of Siri. Apple needs something that its competitors already have. They need something great to hang Siri’s hat on going forward. Without this, the negative perception won’t change, even if Siri does improve incrementally over time.

What would you add to Siri’s feature list? Sound off in the comments below!

App of the Week – Tripit


By Jeff Richardson of iPhone J.D.


Review: TripIt Pro — notification of travel delays and cancellations, and other travel assistance


I’ve been using the free TripIt service for many years. I reviewed TripIt back in 2013, and while the service and the app have improved since then, the basic idea is the same. When you make a travel reservation and receive the email from the airline, hotel, rental car agency, train, etc., you simply forward that email to TripIt. The service recognizes you from your email address, reads and understands the content of those emails, and prepares an online itinerary for your trip. With the free TripIt app on your iPhone (or iPad), all of your travel info is in one place. Thus, if you are in the middle of your trip and need to find the name or address of your hotel, or a reservation number, everything is in one place in the TripIt iPhone app. It is like a virtual travel agent that provides all of the core features. I love the service and recommend it to everyone.


TripIt Pro costs $49 a year, and it adds premium services to look out for you before and during your travel, much like a more sophisticated travel agent might do. The company gave me a free demonstration account earlier this year so that I could try it out, and I’ve used the service in connection with several trips over the summer, fall, and winter of 2016. I enjoyed the service, and I think that it is worth it for any frequent traveler. Here are the key features of the service.

Alerts


TripIt Pro constantly monitors your travel reservations, and if anything changes, you are notified immediately. The value of this service to you will of course depend upon whether anything goes wrong during your travel. If something does go wrong, TripIt Pro is incredibly valuable and the service can pay for itself with just one alert.

In June of 2016, this feature was incredibly valuable for me. I was traveling to Miami along with many other attorneys at my law firm, and I was on an early morning flight. When I woke up, I saw an email from TripIt Pro alerting me that my direct flight had been cancelled.

The email gave me a link to get a list of alternative flights, and included phone numbers for the airline to make changes.  Even though the airline itself never sent me a notification of the cancellation, TripIt Pro gave me the information that I needed to call and book an alternative flight.  The alternative flight was inconvenient — to go from New Orleans to Miami, I had to first fly to Dallas — but at least I was able to (barely) make my meeting in South Florida later on that day.  Many of my partners didn’t find out about the cancellation until they got to the airport, at which time many of the alternative flights were already taken, and some of them missed the meeting entirely.

TripIt Pro gives you other flight alerts as well.  It tells you when it is time to check in — something that most airlines tell you too, but the TripIt Pro email usually arrived before the airline one did, if that makes a difference to you. 


Flight delays and cancellations happen far more often than any of us would like. But with immediate notification of any problems, at least you can be one of the first in line to make alternative arrangements.

Connection Summary

Because I don’t live in a city with a major hub airport, a large number of my flights involve connections through cities like Atlanta. When I land, I want to know information such as the time of my next flight, the gate at which I will be landing, and the gate out of which my next flight will leave. Of course virtually every airline has its own app or website that you can manually access to load all of this information, but sometimes those apps are slow to use. TripIt Pro sends you an email immediately upon landing on your first flight with all of the information that you need to make your connection, including gate information and whether the next flight is on time.


I found it very convenient to have this connection information pushed directly to me so that I didn’t have to do any extra work to find the key information that I needed.

Seat Tracker

I’ve been lucky enough for the past few months to get a good seat at the time that I booked my flight. If you are not as lucky, TripIt Pro includes a Seat Tracker service. Tell the service what kind of seat you are looking for (exit row, aisle, window, specific cabin, front of the plane, etc.) and TripIt Pro will notify you when that seat becomes available. You’ll have to contact your airline to make the change, but at least you will know when it is the right time to do so.

Etc.

TripIt Pro offers other features that didn’t appeal to me, but maybe they would appeal to you. A Point Tracker service lets you track your travel points in one spot. (I find it more useful to just manage this through each specific airline, hotel, train, etc. service.) A flight refund service alerts you if a cheaper flight becomes available and you are eligible for a refund. (Does this ever really happen for anyone?) A sharing feature lets you share travel information with others. (Even with the free TripIt service, I just use the TripIt website to “print” my travel itinerary to a PDF file, and then I share that PDF file with others, without using the Pro sharing features.) And there are some discounts for other travel services if you use TripIt Pro.

Conclusion

It is nice that TripIt Pro offers additional features, but I think for most people the question is whether it is worth $50 a year to you to get immediate notification of delays or cancellation in your travel plans. If you travel often, and mentally divide up that $50 price among each of your different flights, then I suspect many frequent fliers would consider this a bargain. Even just one cancellation can cause a lot of distress for you, and with an immediate alert at least you can start working on a solution to the problem ASAP. The other TripIt Pro features are not in themselves worth $50 to me, but they are nice bonuses that increase the overall value.

Everyone who travels should check out the free TripIt app. If you are a frequent traveler, I encourage you to consider adding the TripIt Pro service.

Click here to get TripIt (free) – iOS
Click here to get TripIt (free) – Android

Do you have a favorite travel app? Tell us about it in the comments below!

App of the Week: OmniGraffle


A refresh of the long-time Mac drawing app from the Omni Group now pulls in images and text from other apps.

By Mike Wuerthele and William Gallagher of Apple Insider

Like its fellow Omni Group apps OmniFocus and OmniPlan, the drawing and charting software OmniGraffle 3.2 has been updated for iOS 11. All three now take advantage of the new operating system’s drag and drop features to change and improve how you work with the apps.

If you’re an AppleInsider reader, you’re already aware that The Omni Group’s software dates back to the dawn of the PowerPC era. More than 20 years later, the company is still updating its suite of software, with OmniGraffle getting a new iOS version for iOS 11.

It’s a drawing application but not for art or sketching. Rather, it’s for making illustrations specifically to explain things. So OmniGraffle is often used for organization charts or for floor plans. You can get very elaborate and detailed, so much so that app designers can mock up in OmniGraffle how their software will look.

OmniGraffle is also meant for just explaining things quickly so it has tools and features to make drawing fast. It’s also got an extremely dedicated following among its users who share and sell collections of templates called Stencils.

If you’ve used MacDraw II, or LucidChart, you’ve got a pretty good handle on what OmniGraffle can do for you. What it can do for you now with iOS 11 is speed up how you can compile a drawing from other people’s Stencils or your own previous documents.


This is done by iOS 11’s drag and drop. It’s the same new drag and drop that has been added to the OmniFocus To Do app where it’s made a significant improvement. It’s the same feature that’s been added to OmniPlan and fixed an issue there that’s been dogging that project management software from the start.

Drag and drop doesn’t make as big a change to OmniGraffle, though. It’s a nice addition, and one that, once you’ve tried it, you won’t want to go back from, yet it doesn’t dramatically transform the app.

There are three aspects to how OmniGraffle exploits this new feature. You can now drag items into your drawing, for instance, and you can drag elements between your drawings. Say you’ve got a floor plan for your house and are now doing one for your office: that sofa shape you spent ages drawing would work fine as a couch in the office plan, so you just drag it over.

Similarly, if you’re planning out a bigger office with lots of cubicles then you can just draw one and duplicate it.

In theory you can also drag cubicles or pot plants in your drawings out of OmniGraffle and into other apps, but currently that’s limited by how many other apps support this feature. This has long been an issue with OmniGraffle and really all such drawing apps, like Lucidchart and Microsoft Visio: how they play with other apps. You can get drawings from any of them into the rest, but typically with some difficulty, and OmniGraffle’s drag and drop may ultimately improve that, once other apps are also updated to accept dragged and dropped items.

The most common uses for OmniGraffle (floor plans, charts, and app design) all tend to be jobs where you reuse elements over and over again. So while everyone is different, the odds are that you’re most likely to drag elements from one OmniGraffle drawing to another, and we can see you building up a library of often-used elements.

Dragging these around is quick and handy, but only once you know how. You could spend the next week stabbing wildly at buttons and options without discovering how to drag an item across multiple documents. That’s really an aspect of iOS 11, however: OmniGraffle uses the same multi-finger approach that the system does.

Press and hold on an item you want to drag and then, with a different finger, tap the button that takes you out of the current OmniGraffle document. That’s a Library icon which needs finding: rather than at the top left of the screen, OmniGraffle places it in the middle, just to the left of the document title.

When you’re back in the Document Picker, as the Omni Group calls it, you can tap to open any other drawing. So long as you’re still holding that element you’ve dragged from the first document, you can now drop it anywhere in the new one.

Once it’s in that new drawing, though, you can use exactly the same technique to drag it between different layers of the document.

We keep saying that you’re dragging elements of a drawing around, but those elements can be text as well as shapes or reused templates. You can drag text in from OmniFocus or OmniPlan, for instance. That’s not going to save you a lot of time unless you’re dragging a lot of text, but it could be a way to make sure you’re consistent across many documents.

It’s the same process for dragging text or graphics out of OmniGraffle into other apps. We had most success doing it with the app’s stablemates OmniPlan and OmniFocus but even that success was limited.

When we dragged to OmniPlan, any text in the item we were dragging went into that project management app’s list of tasks, and a bar appeared representing it in the Gantt chart. When we dragged the same item into OmniFocus, it was entered as a new task called “PDF document.pdf” with an attachment of that name containing the graphic item.

You’re not going to do that. Maybe you’d drag the elements from an org chart over to OmniPlan so that you had every member of staff listed, but that’s a stretch: project plans tend to start with what needs to be done rather than who you’ve got to give work to. So really, dragging out of OmniGraffle won’t become hugely useful until other drawing apps adopt iOS 11’s new features too.

OmniGraffle aims to be a complete drawing package. It also aims to make it quick for you to create detailed and technical drawings. So the ability to quickly re-use elements fits in perfectly with that.

It’s not the kind of update that you go wow at or that you know you’ll rush to use. What it is, though, is the kind of update you’ll become so accustomed to that previous versions will seem slow. OmniGraffle is all about making clear, professional drawings with speed and without fuss, and this is an update that makes good use of the new iOS 11 features.

OmniGraffle 3.2 for iOS has a free trial version on the App Store and then costs $49.99 for the Standard version. A Pro version is a further $4.99 upgrade or you can go straight from the trial to Pro for $99.99.

Do you have a favorite technical drawing rule? Tell us about it in the comments below!

How to: record your iPhone screen in iOS 11

BY ABHISHEK KURVE of Cult of Mac

Recording your iPhone screen used to be a hassle. If you wanted to capture iOS gameplay, or make a funny or informative GIF of on-screen action, you needed to download a third-party app or connect your device to a computer.

Those days are over: With iOS 11, Apple baked in sweet functionality that lets you record your iPhone screen effortlessly. Here’s how to do it.

How to record iPhone screen natively

As you might know, iOS 11, which Apple released on Tuesday, September 19, lets you add and organize toggles in Control Center, and screen recording is now one of the available options.
To use it, open Settings > Control Center and add Screen Recording using the + button.

Now whenever you need to start recording your iPhone screen, just swipe up from the bottom to open Control Center and tap the Screen Recording toggle.

The toggle should turn red, indicating that the screen is being recorded. There’s also a persistent notification bar that shows the duration of the recording.

To end the screen capture, just bring up the Control Center again and turn off the recording by tapping on the same toggle.

Once you’ve finished, you can access your iOS screen recording from inside the Photos app. You can also trim the video to adjust its length.

What do you think of this new feature? Tell us in the comments below!

App of the Week – Things 3

Things 3 task manager launches with beautiful new design and all-new features

By Zac Hall of 9to5Mac

Cultured Code is launching all new versions of its Things task management software for iPhone, iPad, Apple Watch, and Mac. Things 3 includes a beautiful new design with charming interactions across each version and powerful new features for organizing tasks and scheduling assignments.

Cultured Code highlights several tentpole changes in the new version: a totally redesigned interface with new interactions across each version, a new Today and This Evening feature for planning your day, support for headings and checklists on entries, time-based reminders for the first time, and both slim-mode and multiple-window support on the Mac.

There’s even what Cultured Code calls the Magic Plus Button, which lets you intuitively insert new tasks inline with your existing task lists. Cultured Code also highlights desktop-class list editing on iOS, with the ability to manipulate and sort text entries as if they were physical objects. Check out the video at the bottom to see it all in action.

HERE’S HOW THINGS WORKS
If you’re new to Things, this is the basic workflow:

1. Collect Your Thoughts Get things off your mind quickly with Things’ action extension – it lets you create to-dos from other apps. Or just talk to Siri on any device! “Remind me to…”

2. Get Organized Create a project for each of your goals, then add the steps to reach them. For clarity, add structure with headings. Then group your projects by areas of responsibility, such as “Family”, “Work”, or “Health”. Review these regularly to stay on top of things.

3. Plan Your Time See your calendar events alongside your to-dos and plan your time effectively. Create repeating to-dos for things you do every few days, weeks, or months – Things will remind you on the right day.

4. Make the Most of Your Day Every morning, grab a coffee and prepare your list for “Today”: review previously planned to-dos and make quick decisions on what to tackle. Pick some more steps from your projects and then get going. The Today list is the only place you’ll need to look for the rest of the day.

5. Customize Your Workflow Use tags to categorize your to-dos or add context. For example, tag places like “Office” or “Home”, or tag all your “Errands”, or everything you’re working on with “Kate”. You can easily find everything you’ve tagged via filtering or search.

Things 3 is the first paid update to the task manager since Things 2 launched in 2012 and carries the same price of $49.99 for Mac (free trial at culturedcode.com/things), $19.99 for iPad, and $9.99 for iPhone + Apple Watch for all customers. To mark the launch and help existing customers upgrade for less, Cultured Code is discounting Things 3 for each platform by 20% through May 25.

If you’re looking for a powerful task manager with fine-tuned design, Things 3 is an easy recommendation. As a Things 2 customer for years, I’ve used the platform as a Reminders and Notes upgrade (and Reminders integration works with Siri) and I love the new look, interactions, and features of Things 3.

Check out Things 3 in action below:

Download Things 3 for iPhone & Apple Watch
Download Things 3 for iPad
Download Things 3 for Mac

What is your favorite Task Management App? Let us know in the comments below!

How to: use Apple Clips, the iOS video-editing app, and why you’d want to

Apple Clips is like iMovie meets Snapchat.

By Caitlin McGarry of Macworld

It’s easy to compare Apple’s new iOS app, Clips, to video-sharing social networks like Snapchat, Instagram, and Facebook. But that’s not exactly fair, because Apple’s Clips isn’t social at all—it’s designed simply to help you create and edit fun videos. What you do with them after that is up to you.

This approach makes Clips less anxiety-inducing. To share a video on Instagram or Snapchat, you’ll want to shoot in Instagram or Snapchat to make sure the moment you’re capturing is perfectly framed. You can shoot in Clips, too, but this feels more like an app you’ll use after the moment has passed to stitch together memories and add a soundtrack and captions. Clips is way more low-key.

But that doesn’t make it less complicated to use. In fact, it’s on par with Snapchat when it comes to unintuitive design, so be prepared to spend an exorbitant amount of time creating your first clip. (We pray it gets easier the more you use it, but time will tell.) Here’s everything you need to know about using Apple Clips.

How to use Clips

First things first: You need photos and videos to edit, right? Right. You can import them from your Camera Roll and stitch them together, or you can shoot photos or videos in-app. Pro tip: If you plan on filming for a while, you can swipe left on the giant red “Hold to record” button to lock the camera in recording mode. Just tap the button again when you want to stop shooting. Clips defaults to the Instagram-esque square format, so if you’re importing media, make sure it’ll look good square. (Some might mind this, but I don’t.)

From there you can swap videos or photos around in the visual timeline at the bottom of the app just by pressing and moving them. You can also easily trim video clips—just tap on the clip in the timeline and then tap the scissor icon to edit the video down to just the seconds (or minutes) you want to include.

Along with a video-trimming tool, Clips has all the standard social video-editing features (filters, emojis, etc.) tucked behind icons at the top of the app. Tapping the speech bubble icon adds captions in real time (more on this in a minute). Eight filters, ranging from black and white to my favorite, Comic Book, are behind the interlocking-circles icon. The star hides the time, your location, shapes like circles and arrows, and random words you can edit after adding them to your image or video. The ‘T’ icon unlocks title cards that can help you tell your story; the text on these cards is also editable. The last option, a music note, is how you add a song from iTunes or an Apple-supplied tune to your video.

It takes a while to get to know the various tools and tricks to make Clips work for you, but you’ve got this. And remember that creating a clip in Clips doesn’t mean that video goes anywhere but your Camera Roll. You have to take extra steps to share it with anyone or on any platform, which makes it extremely low-pressure.

The best Clips features

Clips has a few features that set it apart from other video-editing apps, the most notable of which is Live Titles. That’s what Apple calls its real-time captioning tool, which is designed to make your video totally watchable without sound. This is perfect for scrolling through Facebook’s auto-playing News Feed, but also improves accessibility, making videos easy to watch for viewers who are deaf or hard of hearing. Live Titles supports 36 languages at launch, which is a feat for a brand-new app.

The Live Titles feature doesn’t always nail the speech-to-text translation, though. I didn’t experience any captioning errors in my tests, but if you speak quickly and run your words together, you might confuse the algorithm parsing your sentences. Speak slowly and enunciate to avoid having to edit your captions. (I actually didn’t know this was possible, but the Wall Street Journal’s Joanna Stern discovered that you can edit a caption by tapping on the video clip, then pausing the video where the error appears on screen and tapping on the text. Yeah, it’s kind of a process.)

But basically everything you see on screen is editable, which is incredibly useful. Every bit of text can be changed and even emojis can be easily swapped out by tapping on the emoji on-screen and then tapping again to access your emoji keyboard.

In your first few hours with Clips, it’ll feel a little burdensome. But once you figure it out, creating social videos with Clips is a cinch.

Time to share

Once your masterpiece is finished, it’s time to share it. Clips uses facial recognition to figure out who’s in your video and then suggests that you use iMessage to send your video to those friends, which is really cool.

You can also share a clip via email or post it to your go-to social networks, minus Snapchat. Snapchat is not designed for sharing what’s essentially a short social movie (not to mention the fact that clips are square and snaps are vertical). But clips seem tailor-made for sharing on Facebook in particular. Imagine creating movies of your kids or making your own DIY Tasty food recipe video with Clips. Post them to your page and watch the likes roll in.

It’s a good thing Apple didn’t try to build a social network around Clips (lesson learned from Ping, perhaps). Instead, Apple is doing what it does best: giving creators the tools they need to make good work. Right now, popular media tends to be short and shareable. With Clips, maybe you too can snag 15 minutes—or more likely seconds—of viral video fame.

What do you think of Apple Clips? Let us hear from you in the comments below!
