
Telstra blames faulty vendor equipment for Eftpos outage

Telstra worked through Friday night to replace 'faulty vendor equipment' after an M2M data outage took down Eftpos machines and ATMs from midday Friday until almost 1pm Saturday.
Written by Corinne Reichert, Contributor

Telstra's machine-to-machine (M2M) data outage that took down Eftpos machines and ATMs has finally been resolved, with the telco blaming "faulty vendor equipment" for the issues that initially hit customers at midday on Friday.

"We're currently experiencing an issue with some enterprise customer machine to machine (M2M) data services, which is impacting services including EFTPOS devices and ATMs," Telstra tweeted on Friday at 12.40pm AEDT.

Telstra then tweeted at 6.57pm AEDT Friday that it was seeing "some improvement" on the M2M data services issue, saying it had given the issue its highest priority but was "still working on full restoration".

At 8.40am AEDT Saturday, Telstra tweeted that it was making progress on restoring M2M data services after its team "worked through the night to re-establish connectivity through the replacement of faulty vendor equipment".

"We're gradually reintroducing traffic to the link, but there are devices that continue to be impacted. We continue to work on restoring all services as quickly as possible," Telstra tweeted on Saturday morning.

Telstra finally restored services as of 12.35pm AEDT on Saturday.

"All devices should now be capable of connecting -- a small number of devices may require a restart to reconnect and we're working this through with our customers," Telstra tweeted at 12.38pm AEDT Saturday.

Earlier in the week, Telstra had also experienced a cloud services outage, which impacted enterprise customers as well as access to some of its online services.

"Services are back online and should be working normally. We sincerely apologise for the impact and we continue to investigate the cause," a Telstra spokesperson said in a statement on Wednesday.

The statement came six hours after the telco originally confirmed the cloud services outage on Twitter.

The pair of outages this week follows the Australian Communications and Media Authority (ACMA) saying last week that its investigation into Triple Zero emergency call services had found Telstra breached the rule requiring all 000 calls on its network to be carried to emergency call operators.

According to the ACMA, Telstra failed to deliver 1,433 calls to the emergency service operator on May 4 due to a network outage, breaching s22 of the Telecommunications (Emergency Call Service) Determination 2009 and the Telecommunications (Consumer Protection and Service Standards) Act 1999.

That outage had been caused by fire damage to fibre cables, which interrupted mobile voice connections across New South Wales, Victoria, South Australia, and Queensland for around nine hours.

In June, Telstra wholesale mobile virtual network operator (MVNO) customers were also impacted across 3G and 4G services as a result of a "vendor platform issue".

The wholesale mobile outage followed a fibre cable cut earlier in June, which affected wholesale mobile and fixed-line services and several thousand broadband and ADSL services.

Telstra in May also said yet another mobile outage was caused by a software fault, which it said "triggered multiple elements across the network to fail". Its 4G voice network was also affected following "technical changes made ahead of upgrades to mobile traffic control equipment in Telstra's Exhibition Street exchange in Melbourne" in early May.
