FAQ
Security and Compliance
Questions

Please summarise or attach your application vulnerability management processes and procedures?

At viaSocket, our vulnerability management approach prioritizes protection at the network, application, and code levels, leveraging leading cloud and security platforms. Our process includes:

  • Perimeter Protection: We use Cloudflare WAF to mitigate critical vulnerabilities such as SQL Injection, Cross-Site Scripting (XSS), and Distributed Denial-of-Service (DDoS) attacks. Rate limiting and IP reputation controls are enabled to defend against abusive traffic.

  • Cloud Security: Our backend runs on Google Cloud Platform (GCP) in Google Kubernetes Engine (GKE), with no public internal IPs, VPC isolation, and IAM role management. GCP Security Command Center flags misconfigurations or security risks.

  • Authentication: We use OAuth for secure authentication, and all API communication is encrypted over HTTPS.

  • Monitoring: Atatus, Cloudflare, and GCP tools provide real-time performance, error, and security monitoring, helping us detect anomalies and investigate security issues quickly.

  • Incident Response: We monitor runtime systems continuously and act based on predefined alerting rules. Our team uses GCP and Atatus logs to respond to suspicious behavior or security events.

Ongoing Improvements:
We are currently integrating automated tools to improve our handling of:

  • Vulnerable dependencies (SCA) – tools like Snyk or Trivy

  • Static code vulnerabilities (SAST) – tools like Semgrep

These upgrades will help us catch vulnerabilities earlier in the development lifecycle and strengthen compliance with common industry expectations.

Application Vulnerability Management
Jun 12, 2025

At viaSocket, all employees operate from a secure in-office environment. Each team member is responsible for managing their own workstation. While we do not currently use a centralized endpoint management solution, access to production systems is strictly limited and controlled through secure methods. Workstations do not connect directly to production infrastructure. All access is mediated through secure cloud environments (GCP/GKE) and is gated via SSH keys, VPNs, and role-based permissions.

Although device configurations are not centrally enforced, we maintain internal standards and encourage all employees to follow security best practices, including:

  • Use of strong login passwords and automatic system lock after inactivity

  • Limiting administrative privileges to reduce the risk of privilege escalation

  • Use of secure office networks with firewall-level protections

Employees are also trained on general security hygiene and safe software development practices. All development and operations workflows occur within secured environments, such as GCP-hosted containers, ensuring minimal reliance on local execution or sensitive local storage.

We maintain a strict policy that no sensitive or private customer data is stored on endpoint devices. All sensitive operations are conducted through secure cloud infrastructure, and customer data remains encrypted and contained within GCP-managed services.

  • Access to sensitive data is restricted via SSH key-based authentication, segregated user credentials, and limited access roles

  • Employees do not have local access to databases, secrets, or production logs

  • Shared credentials and sensitive tokens are stored securely in cloud-managed environments and not distributed to individual machines

This policy is enforced through technical design — we architect systems to never expose sensitive data at the endpoint level. Combined with secure defaults in our cloud infrastructure and clear internal guidelines, this ensures that the risk of endpoint-based data exposure is effectively mitigated.

Endpoint Security - End User
Jun 12, 2025

Yes, our production environment is hosted within a Google Cloud VPC, which provides a secure, isolated network environment. While we use a single VPC for both testing and production, services are logically separated and access is tightly controlled through firewall rules, IAM policies, and namespace-level isolation within Google Kubernetes Engine (GKE).

No internal APIs, databases, or backend services are publicly exposed. All such components are assigned private IP addresses only, and communication is restricted within the cluster or VPC using Kubernetes network policies and GCP firewall rules, ensuring secure, segmented access even within a shared network.

All network configuration changes (such as updates to VPC rules, firewall settings, or IP access control lists) are performed manually but undergo multiple layers of review before implementation. Changes are reviewed by relevant engineers and release managers, ensuring that no modifications are applied without proper oversight and risk assessment. This review process ensures network changes align with our security and operational standards.

Yes, all network traffic to and from the production infrastructure over public networks is secured using cryptographically sound encryption protocols, primarily HTTPS with TLS 1.2/1.3. We enforce HTTPS at the edge using Cloudflare, which proxies and secures all external-facing services. There are no plaintext connections to production systems over public networks, and no ports or services are exposed without encryption. For internal communication, GCP’s infrastructure provides encryption in transit by default, and traffic within Kubernetes (GKE) clusters is restricted to private, secured channels.

Network Security
Jun 12, 2025


All data in transit over public networks is secured using TLS 1.2 or higher, enforced via Cloudflare and Google Cloud. All public-facing APIs and services are only accessible over HTTPS, ensuring strong encryption.

We use Google Cloud's default encryption at rest, which leverages AES-256 encryption for all data stored on disks, databases, and cloud-managed services (such as GKE, Cloud Storage, Cloud SQL, etc.). For additional protection, sensitive user information stored within our databases is explicitly encrypted at the application level using AES-256, ensuring double-layer protection beyond the infrastructure defaults.
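
As an illustration only, the sketch below shows what application-level AES-256 field encryption can look like using Node's built-in crypto module; the key source (an environment variable), the stored value format, and the choice of GCM mode are assumptions made for the example, not a description of our exact implementation.

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "crypto";

// Assumption for the sketch: a 32-byte key provided out-of-band (e.g. a secret manager),
// supplied here as 64 hex characters in an environment variable.
const key = Buffer.from(process.env.FIELD_ENCRYPTION_KEY ?? "", "hex");

// Encrypt a single sensitive field with AES-256-GCM (authenticated encryption).
export function encryptField(plaintext: string): string {
  const iv = randomBytes(12); // unique IV per value
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag();
  // Store IV and auth tag alongside the ciphertext so the stored value is self-contained.
  return [iv, tag, ciphertext].map((b) => b.toString("base64")).join(".");
}

// Decrypt a value produced by encryptField; throws if the ciphertext was tampered with.
export function decryptField(stored: string): string {
  const [iv, tag, ciphertext] = stored.split(".").map((s) => Buffer.from(s, "base64"));
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}
```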

We support multiple authentication methods, including Google OAuth and traditional email/password login. For users authenticating via email and password, we apply industry-standard cryptographic hashing and salting using the well-vetted primitives of the Node.js crypto module. Passwords are never stored in plaintext, and the hashing approach is designed to resist brute-force and rainbow table attacks.
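
As a hedged sketch of this approach (the scrypt parameters and the salt:hash storage format shown here are illustrative assumptions, not our exact configuration), salted password hashing with the Node.js crypto module can look like this:

```typescript
import { randomBytes, scryptSync, timingSafeEqual } from "crypto";

// Hash a password with a per-user random salt using scrypt (a memory-hard KDF
// available in the Node.js crypto module).
export function hashPassword(password: string): string {
  const salt = randomBytes(16);
  const hash = scryptSync(password, salt, 64);
  return `${salt.toString("hex")}:${hash.toString("hex")}`; // salt is stored with the hash
}

// Verify a login attempt; timingSafeEqual avoids timing side channels.
export function verifyPassword(password: string, stored: string): boolean {
  const [saltHex, hashHex] = stored.split(":");
  const candidate = scryptSync(password, Buffer.from(saltHex, "hex"), 64);
  return timingSafeEqual(candidate, Buffer.from(hashHex, "hex"));
}
```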

Beyond password protection, all sensitive user data stored in our databases is encrypted at rest using AES-256 encryption, providing a robust layer of security for confidential information.

This combined approach ensures strong security controls around user credentials and sensitive data, leveraging both secure external authentication providers and best-practice cryptographic safeguards internally.



No, we do not use any custom cryptographic implementations. We rely entirely on well-established cryptographic standards and libraries provided by Google Cloud, Node.js, and trusted open-source libraries. This avoids the risks associated with designing or implementing cryptographic logic internally.

Cryptographic Design
Jun 12, 2025

Our Security Incident Response Program is designed to ensure timely detection, containment, and remediation of security incidents to minimize impact on our services and customers. The program includes:

  • Defined roles and responsibilities: We have a dedicated security and operations team responsible for incident investigation and management. Alerts from monitoring tools like Cloudflare and Atatus trigger immediate review.

  • Incident classification and prioritization: Incidents are categorized based on severity and potential impact, allowing us to allocate resources efficiently.

  • Incident handling procedures: We follow a structured process including identification, containment, eradication, recovery, and post-incident analysis.

  • Communication protocols: Internal notifications are sent promptly via Slack and email to relevant stakeholders. If necessary, we escalate incidents to senior leadership.

  • Documentation and reporting: All incidents are logged with details on cause, resolution steps, and lessons learned to improve future response.

We test our Incident Response Plan through periodic tabletop exercises and simulated scenarios involving key team members from security, operations, and development. These exercises occur at least twice a year and are designed to validate the effectiveness of our procedures, communication, and coordination under realistic conditions.

Additionally, we review and update the plan after any significant incident or change to our infrastructure to ensure it remains current and effective.

Incident Response
Jun 12, 2025

Data exfiltration from production environments is tightly controlled. SSH access to production servers is restricted through IAM-based access control, and only a very limited set of authorized engineers are granted permission. All access is logged and monitored. Additionally, production environments are configured to disallow file extraction or external data transfers, and outbound internet access is disabled by default where not explicitly required. These measures collectively ensure that data movement from production systems is tightly regulated.

Endpoint Security - Production Server
Jun 12, 2025

Please summarise or attach your network vulnerability management processes and procedures?

We have a structured process in place for identifying, assessing, and addressing network and host-level vulnerabilities within our infrastructure.

  • Vulnerability Scanning:
    We use Google Cloud Security Command Center (SCC) to perform regular network and host vulnerability scans. These scans are conducted monthly to identify misconfigurations, exposed services, and known vulnerabilities across our infrastructure, including GCP-managed services and Kubernetes (GKE) nodes.

  • Threat Intelligence & Monitoring:
    We rely on integrated security feeds and alerts from GCP SCC, Cloudflare, and Atatus to stay aware of vulnerabilities relevant to our environment. These tools provide continuous monitoring for new threat vectors and suspicious activity, especially at the network and application layers.

  • Review & Mitigation:
    Identified vulnerabilities are triaged and reviewed by multiple team members, including engineers and release managers, to determine appropriate remediation. Patching decisions are prioritized based on severity, exploitability, and impact on production workloads.

  • Tracking & Accountability:
    We maintain an internal tool, Db Dash, to track, manage, and resolve vulnerabilities. This ensures visibility into the status of each issue and accountability for timely remediation.

  • Patch Management:
    While we do not currently use an automated patching system, all patches related to vulnerabilities identified through scans or alerts are manually assessed and applied as needed, with peer review and regression testing in our dedicated testing environment before production deployment.

This process ensures we proactively identify and manage risks in our cloud-hosted and containerized infrastructure, while also maintaining operational stability and basic security hygiene.

Network/Host Vulnerability Management
Jun 12, 2025

1. Input Validation (Basic XSS/SQL Injection)

  • In any input field (e.g., form, search), enter:

    • "><script>alert(1)</script> → Should not execute any script.

    • ' OR 1=1-- → Should not affect login or queries.

  • Observe page behavior and console for script execution or server errors.


2. Session Management

  • Login → Close tab → Reopen and access the app → Should still be logged in if the session is valid.

  • Login → Stay idle for 30+ minutes → Try again → The app should time out the session.

  • After logout, press the browser back button → Should not allow access to authenticated pages.


3. Access Control / IDOR (Insecure Direct Object Reference)

  • Login as User A → Copy a URL containing an object ID (/flow/123)

  • Login as User B → Try accessing that same URL → Should receive Access Denied or 404 (a scripted version of this check is sketched below)
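
A scripted version of this IDOR check might look like the sketch below; it assumes Node 18+ for the built-in fetch, and the host name and session cookie are placeholders to be replaced with real test values.

```typescript
// Assumes Node 18+ (built-in fetch). The URL and session cookie are placeholders:
// substitute the object URL captured as User A and a session belonging to User B.
const objectUrl = "https://app.example.com/flow/123";
const userBSession = "session=<user-B-session-cookie>";

async function checkIdor(): Promise<void> {
  const res = await fetch(objectUrl, {
    headers: { Cookie: userBSession },
    redirect: "manual", // a redirect to the login page also counts as a denial
  });
  // Expect 403/404 (or a redirect), never a 200 that returns User A's data.
  const denied =
    res.status === 403 || res.status === 404 || (res.status >= 300 && res.status < 400);
  console.log(denied ? `PASS: access denied (${res.status})` : `FAIL: got ${res.status}`);
}

checkIdor().catch(console.error);
```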


4. Error Handling / Information Leakage

  • Force an error (e.g., disconnect network or corrupt URL) → App should show a friendly error page, not a stack trace, database name, or internal path.

  • Look at responses in dev tools → Ensure no sensitive info (tokens, env values, etc.) is leaked.


5. Role-based Access Control (RBAC)

  • Log in with different roles (Admin/User/Viewer, etc.)

  • Try accessing features or APIs not allowed for that role via a direct link → The app should block access or hide the feature.


6. File Upload Validation (if applicable)

  • Try uploading:

    • .exe or .php files → Should be blocked

    • Large files (e.g., >20MB) → Should show limit exceeded

    • Rename a .js file to .jpg → Should still be blocked

  • Ensure uploaded files can’t be directly accessed unless needed


7. Security Headers (Non-CF Controlled)

Use browser dev tools → Network → Any request → Response headers
Check for:

  • X-Frame-Options: DENY

  • X-Content-Type-Options: nosniff

  • Referrer-Policy: no-referrer or strict-origin-when-cross-origin

(Cloudflare may not always set these headers; the backend should ensure they are present. A scripted check is sketched below.)
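
A quick scripted check for these headers might look like the following sketch; it assumes Node 18+ for the built-in fetch, and the target URL is a placeholder.

```typescript
// Assumes Node 18+ (built-in fetch). The target URL is a placeholder.
const target = "https://app.example.com/";

// The three headers from the checklist above and the values that count as a pass.
const expected: Record<string, string[]> = {
  "x-frame-options": ["deny"],
  "x-content-type-options": ["nosniff"],
  "referrer-policy": ["no-referrer", "strict-origin-when-cross-origin"],
};

async function checkSecurityHeaders(): Promise<void> {
  const res = await fetch(target);
  for (const [name, allowed] of Object.entries(expected)) {
    const value = (res.headers.get(name) ?? "").toLowerCase();
    const ok = allowed.some((v) => value.includes(v));
    console.log(`${ok ? "PASS" : "FAIL"} ${name}: ${value || "(missing)"}`);
  }
}

checkSecurityHeaders().catch(console.error);
```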


8. Token & API Security

  • Open browser dev tools → Look for JWT or auth tokens → Ensure they are:

    • Stored in secure, HTTP-only cookies (preferred)

    • Not exposed in localStorage/sessionStorage (avoid this if possible)

  • Try calling a few API endpoints manually with an expired/invalid token → Should return 401/403, not data (see the sketch below).
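
For the invalid-token check, a minimal sketch (assuming Node 18+ with built-in fetch, a placeholder endpoint, and a Bearer token scheme, which is an assumption about the API) could be:

```typescript
// Assumes Node 18+ (built-in fetch). The endpoint is a placeholder; the token is
// deliberately invalid.
const endpoint = "https://app.example.com/api/flows";
const invalidToken = "expired-or-garbage-token";

async function checkTokenRejection(): Promise<void> {
  const res = await fetch(endpoint, {
    headers: { Authorization: `Bearer ${invalidToken}` },
  });
  if (res.status === 401 || res.status === 403) {
    console.log(`PASS: rejected with ${res.status}`);
  } else {
    // Any 2xx here means the endpoint returned data to an invalid token.
    console.log(`FAIL: expected 401/403, got ${res.status}`);
  }
}

checkTokenRejection().catch(console.error);
```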

QA checklist
Jun 12, 2025